
UNIVERSITY OF OSLO
Department of Informatics

CommonSens
A Multimodal Complex Event Processing System for Automated Home Care

PhD thesis

Jarle Søberg
© Jarle Søberg, 2011
Series of dissertations submitted to the
Faculty of Mathematics and Natural Sciences, University of Oslo
No. 1089
ISSN 1501-7710
All rights reserved. No part of this publication may be
reproduced or transmitted, in any form or by any means, without permission.
Cover: Inger Sandved Anfinsen.
Printed in Norway: AIT Oslo AS.
Produced in co-operation with Unipub.
The thesis is produced by Unipub merely in connection with the
thesis defence. Kindly direct all inquiries regarding the thesis to the copyright
holder or the unit which grants the doctorate.
Abstract
Automated home care is an application domain of rapidly growing importance for society.
By deploying sensors in the homes of the elderly, it is possible to monitor their well-being and activities of daily living (ADLs). Automated home care increases quality of life by letting the elderly,
i.e., the monitored person, live in a familiar environment. In this thesis we present CommonSens, a multimodal complex event processing (CEP) system for detecting ADLs from sensors
in the home. CommonSens is modelled and designed to simplify the work of the application
programmer, i.e., the person who writes the queries that describe the ADLs.
System-supported personalisation simplifies the work of the application programmer, and
CommonSens adapts to any environment and sensor configuration. During an instantiation
phase CommonSens analyses queries and uses late binding to select the sensors in the environment that are relevant for the query. In order to realise the personalisation, CommonSens is
based on three separate models: (1) an event model to identify states and state transitions that
are of interest in the real-world, (2) a sensor model to describe the capabilities of physical and
logical sensors, their coverage and the signal types they use, and (3) an environment model to
describe the physical dimensions of the environment and the impact they have on various signal
types. In order to approximate coverage of locations of interest (LoIs) in the home, CommonSens uses multimodality and can combine readings from different sensor types. In addition to
traditional query processing, CommonSens supports the concept of deviation detection, i.e., the
queries are interpreted as statements, or rules, which describe the desired behaviour as events.
When CommonSens detects that these rules are not followed, it sends a notification about the
deviation.
Through the implementation of CommonSens we evaluate three claims: CommonSens (1)
detects complex events and deviations, (2) processes data tuples in near real-time, and (3) is
easy to use and provides personalisation. We show these claims by using simulations based
on synthetic workload and trace files, as well as real-world experiments using real sensors and
real-time CEP. We show that CommonSens provides personalisation by instantiating the queries
differently depending on the current environment.
Acknowledgements
First of all, I would like to thank my excellent and conscientious supervisors Professor Dr. Vera
Goebel and Professor Dr. Thomas Plagemann.
Second, I would like to thank the whole gang at the Distributed Multimedia Systems research group. I hope I will still be allowed to join the salary beers. I would especially like to
thank Azadeh Abdolrazaghi and Dr. Sebastien F. Mondet for proofreading and asking insightful
questions about things I had never thought of. I would also like to thank Viet Hoang Nguyen for
helping me with the MICAz experiments. Through the years, I have also had many interesting
discussions with Morten Lindeberg. Some were also related to research. Dr. Matti Siekkinen
and Dr. Katrine Stemland Skjelsvik have helped me with their research experience, and have
also contributed in joint work. I also want to thank Radioresepsjonen for their podcasts. They
helped me to fall asleep during periods of high stress and to laugh out loud in situations where
such behaviour is considered rather eccentric, e.g. when sitting on the bus during rush hour.
Finally, I would like to thank my wonderful wife Ingerid Skjei Knudtsen and my family (also
wonderful).
Contents

Abstract
Acknowledgements

1 Introduction
   1.1 Problem Statement
   1.2 A Brief Look at Additional Issues
   1.3 Claims, Methods and Approach
   1.4 Contributing Papers
   1.5 Structure of the Thesis

2 Background and Related Work
   2.1 Automated Home Care
       2.1.1 Roles
       2.1.2 Requirement Analysis
   2.2 Sensor Technology and Sensor Models
   2.3 Events
       2.3.1 Event Models
       2.3.2 Query Languages
       2.3.3 Complex Event Processing and Personalisation
       2.3.4 Spatial Issues
       2.3.5 Deviation Detection
   2.4 Discussion and Conclusion

3 CommonSens Data Model
   3.1 Event Model
   3.2 Environment Model
   3.3 Sensor Model
   3.4 Query Based Event Language
       3.4.1 Semantics
       3.4.2 Syntax
   3.5 Discussion and Conclusion

4 Instantiation and Event Processing
   4.1 Sensor Placement
       4.1.1 Coverage Area Calculation
       4.1.2 Sensor Placement
   4.2 Query Instantiation
   4.3 Event Processing
       4.3.1 Query Evaluator
       4.3.2 Query Pool
       4.3.3 Data Tuple Selector
   4.4 Discussion and Conclusion

5 Implementation
   5.1 Overview
       5.1.1 Environment Model
       5.1.2 Sensor Model
       5.1.3 Query Language
   5.2 Functionality
       5.2.1 System Control
       5.2.2 Physical Sensor Creation and Placement
       5.2.3 Event Processing Model Creation
       5.2.4 Event Processing
   5.3 Discussion and Conclusion

6 Evaluation
   6.1 Detecting Complex Events and Deviations
       6.1.1 Functionality Tests
       6.1.2 Real-world Evaluation
       6.1.3 Trace File Evaluation
   6.2 Scalability and Near Real-Time Event Processing
   6.3 Personalisation and User Interface
   6.4 Discussion and Conclusion

7 Conclusion
   7.1 Summary of Contributions
   7.2 Critical Review of Claims
   7.3 Open Problems and Future Work
       7.3.1 Open Problems
       7.3.2 Future Work

Bibliography

A Appendix
   A.1 calculateError
   A.2 reduceRay
   A.3 Functionality Tests Configuration
       A.3.1 Data Files from the Last Experiment in Section 6.1.2
   A.4 Trace Files from Cook and Schmitter-Edgecombe
List of Figures

3.1 The relation of the core elements in our conceptual model of the real world.
3.2 Concurrent and consecutive atomic events.
3.3 Overview of the V, N and E sets.
3.4 Example of how a wall reduces the coverage area of a camera.
3.5 Examples of capability hierarchies for detecting falls and taking medication.
3.6 Examples of allowed sequences in a P-registered query.
3.7 Our query language as written in EBNF.
4.1 Life cycle phases and concepts of CommonSens.
4.2 Signals that are sent through the objects in two directions and which create intervals.
4.3 The REDUCE algorithm.
4.4 Coverage range divided into several intervals with different permeability values.
4.5 A model of a circle with a set of rays.
4.6 The rays are affected by an object and the coverage area is reduced.
4.7 Using physical sensors to approximate LoIA.
4.8 Examples of relations between sensors that give equivalent results.
4.9 The FINDSENSOR algorithm.
4.10 Overview of the query processor in CommonSens.
5.1 Key classes in the environment package.
5.2 Key classes in the sensing package.
5.3 Key classes in the language package.
5.4 The parsed version of IqC1.
5.5 Key classes in the modelViewController package.
5.6 Main window in CommonSens.
5.7 Environment creator in CommonSens.
5.8 Classes involved in the calculation of reduced coverage area.
5.9 Before and after the reduceRay() method has been called.
5.10 Key classes in the eventProcessor package.
5.11 Instantiation of a box with atomic queries.
5.12 Overview of the event processing phase.
5.13 Mixed matching versus uniform matching.
6.1 Environment instances used in functionality tests.
6.2 LoIs used in functionality tests.
6.3 Movement pattern classes used in functionality tests.
6.4 Nine possible locations in the environments.
6.5 The environment in CommonSens and in the real world.
6.6 Comparison of received and calculated signal strength.
6.7 Real world experiments with only one camera covering LoI1 and LoI2.
6.8 Real world experiments with three cameras covering LoI1 and LoI2.
6.9 Overview of the hallway and location of cameras.
6.10 Results from the hallway experiment.
6.11 Processing time with 6, 66, 126, 186, and 246 sensors in the environment.
6.12 Processing time with an increasing number of concurrent queries.
6.13 Average processing time for atomic queries in the functionality tests.
6.14 Two environments with different setup.
6.15 Excerpts from the hallway with the new LoI Hallway.
6.16 Results from the four LoIs that are turned into Hallway.
List of Tables

6.1 Workload types and sections they are used.
6.2 Return values from the functionality tests and their meaning.
6.3 Mapping between movement pattern classes and movement patterns.
6.4 Results from functionality tests 178 to 182.
6.5 Results from functionality tests 172 and 173.
A.1 Regression tests.
A.2 Complex queries cq1.qry to cq23.qry, which are used in the regression tests.
A.3 Complex queries cq24.qry to cq34.qry, which are used in the regression tests.
A.4 Complex queries cq35.qry to cq74.qry, which are used in the regression tests.
A.5 Regression test results.
Chapter 1
Introduction
The increasing proportion of elderly people in the world requires alternatives to the traditional home care
that we have today, since this changing ratio means that there are many more persons to be
taken care of and fewer persons to perform this task. Hence, there is a need for alternatives that
automate the home care application domain. Recent development of sensor technology has
paved the way for these alternatives. Automated home care, or ambient assisted living, consists
of placing sensors in the homes of the elderly and using systems that obtain and process the information
from the sensors. If the system detects alarming situations, it can send notifications either to
the person being monitored or to the helping personnel. Automated home care systems can
raise the threshold for hospitalisation and for placing the elderly in retirement homes. Instead
of being placed in retirement homes, the elderly can live in their own homes and be monitored by
sensors. This means they can live in familiar environments, i.e., their homes, while feeling safe.
The placement and capabilities of the sensors are customised for the well being of the elderly.
Sensor readings are low level data created by converting analogue signals to digital values.
Even though these signals are the foundation of the information about the elderly, it is important to have a framework that manages to handle all the sensor signals and filter out those
that are not interesting. In addition, the framework must have a simple higher
level interface that abstracts away the low level sensor data. One possible approach is to use
the concepts from complex event processing (CEP) to handle this. CEP is a technology that
allows us to write declarative queries that are used to continuously filter interesting events from
a stream of data tuples from sensors.
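To make the idea of continuous filtering concrete, the following minimal Java sketch applies a declarative-style predicate to a stream of sensor data tuples. The class and field names are illustrative assumptions of ours and are not part of CommonSens, whose actual implementation is described in Chapter 5.

```java
import java.util.function.Predicate;
import java.util.stream.Stream;

// Illustrative data tuple: one reading from one sensor (names are hypothetical).
record DataTuple(String sensorId, long timestamp, double value) {}

public class ContinuousFilterSketch {
    public static void main(String[] args) {
        // A declarative condition: "temperature in the kitchen above 30 degrees".
        Predicate<DataTuple> query =
                t -> t.sensorId().equals("kitchen-temperature") && t.value() > 30.0;

        // In a real CEP system the stream is unbounded and pushed by sensors;
        // here a finite stream stands in for it.
        Stream<DataTuple> sensorStream = Stream.of(
                new DataTuple("kitchen-temperature", 1000, 21.5),
                new DataTuple("kitchen-temperature", 2000, 31.2));

        // Continuously evaluate the query and notify on every match.
        sensorStream.filter(query)
                    .forEach(t -> System.out.println("notification: " + t));
    }
}
```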
In this thesis we present CommonSens, a multimodal CEP system for automated home care.
CommonSens detects complex events and deviations from complex events. In addition, CommonSens provides personalisation, i.e., it adapts a query plan to the current home, sensors and
monitored person. Personalisation is an important contribution for automated home care. This
is because there are many similarities between the instances and one type of event may apply to
many different persons. Fall detection and medication taking, i.e., making sure that the person
remembers to take his medication, are examples of such common, yet important, events. By
personalising, it is possible to reuse or alter queries that describe the same events. Personalisation can be done by adapting the event description to the environment, the available sensors,
and the needs of the monitored person.
Automated home care today is either based on closed and proprietary solutions or there is
limited communication between the sensors, i.e., the water leakage sensor starts beeping when
it recognises water and the motion detector starts beeping if it is night time and someone is
in the living room. Our goal is to achieve an open solution that uses simple, comprehensive
and domain related models. As part of the personalisation goal, the models facilitate reuse of
model instances, and separation of concern, which are important design principles in computer
science.
1.1 Problem Statement
In automated home care there are several issues that have to be investigated. When using an
open system, one cannot expect a homogeneous set of sensors. If the system was proprietary,
we could have defined explicitly how the sensors should be configured. We could also have
decided that only certain types of sensors could have been used for detecting given events. For
instance, only one type of camera can be used, simplifying the process considerably. With
an open system, we cannot make such assumptions. It is expected that the set of sensors is
heterogeneous, and given heterogeneity, we need models that address the important properties
for the sensors that are used in our domain. In order to handle heterogeneity, it might be easier
to address what type of events we want instead of addressing the sensors directly. The system
identifies the sensors that are most appropriate by performing a late binding between the event
type and the sensors that are located in the home.
The homes and the persons in the homes are different as well. Therefore, it is important to
have a system that manages to adapt to the various instances, i.e., homes, sensors, and persons.
It is unrealistic just to have one simple system that is completely reconfigured for each home.
Such work consumes too many resources and takes too much time to configure. However, there
are similarities between homes and persons as well. Homes consist of the same types of rooms
and areas that might be interesting to monitor. Persons might suffer from similar conditions.
These aspects have to be addressed and used as part of the system deployment.
The issue of sensor placement has to be considered as well. For instance, the coverage areas
of sensors are affected by the environment. A concrete wall reduces the coverage of a camera
since the camera uses light, whereas a radio based sensor can send signals that pass through the
wall. Given the fact that the sensors are heterogeneous, it is important that there exist ways of
modelling any coverage area and still be able to use the system to investigate how these sensors
should be placed in the home. This is done in order to obtain as much information as possible.
Sometimes we also need to combine the information from several heterogeneous sensors in
order to obtain sufficient information about the monitored persons, for example his location.
In automated home care, there are issues regarding networking and data collection. Relying
on only one type of communication simplifies the data collection. On the other hand, it reduces
the possibilities of using other types of communication. For instance, having only wired sensors
in the home is simple. However, if the monitored person wears a sensor, the communication
has to be wireless as well. Therefore, a system for automated home care has to be general and
has to support all possible types of networking and data collection.
Heterogeneous sensors and homes complicate the data processing, and, as noted above,
it is not possible to use an automated home care system that does not take these issues into
consideration. Sensors only report lower level information, and it is not sufficient to rely on only
one type of sensor. Therefore, it is imperative that the system manages to handle the lower level
sensor readings, and combines and relates these readings into meaningful events. This can be
achieved by using a CEP system. However, choosing to use a CEP system to solve these tasks is
a challenge, since there exists no CEP system that handles all the issues related to heterogeneity,
sensor placement, networking, data collection, and processing. On the other hand, CEP is a
promising technology, since it allows us to write queries that describe the observed behaviour
of the monitored persons.
Using the concepts from CEP also simplifies the work of the application programmer. The
application programmer does not necessarily have a computer science background. Hence,
lower level sensor programming is not an acceptable solution. In addition, we assume a wide
deployment of automated home care solutions in the years to come. Therefore, it is important
to provide a simple and declarative query language that allows this wide deployment of queries.
By utilising many of the similarities in the home care application domain it is possible to provide
the simple and declarative query language and still provide personalisation, i.e., adapt a query
plan to the current home, sensors and monitored person. This means that the CEP system
has to handle the issues that we have described above, while still simplifying the work for the
application programmer. The discussion leads us to the following problem statement:
In automated home care, there is a need for a CEP system that handles the domain specific challenges related to automated home care. The CEP system should
simplify the work for the application programmer, and automatically handle personalisation by adapting to the current sensors and homes.
The application programmer should be part of the planning, the query writing, and the sensor placement. This means that the work done by the application programmer is very important
for the well being of the elderly person. In order to further simplify the work for the application programmer, we have to investigate alternatives to traditional query processing. Traditional
query processing consists of writing queries that describe what we want. For instance, if the
application programmer wants to detect falls, he has to describe the fall through a query. An
alternative is to use deviation detection, i.e., the application programmer writes queries that
describe expected behaviour. Only if there are deviations from the expected behaviour does CommonSens send a notification.
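As a rough illustration of the difference between the two modes, the following sketch (with hypothetical names, not the CommonSens query evaluator) notifies on a match in the traditional case, and only on a missed expectation in the deviation case.

```java
import java.util.Optional;

public class DeviationSketch {
    // Hypothetical expected event: "the person gets up between 07:00 and 09:00".
    record ExpectedEvent(String description, int windowStartHour, int windowEndHour) {}

    // Traditional processing: notify when the described event is observed.
    static Optional<String> traditional(boolean observed, ExpectedEvent e) {
        return observed ? Optional.of("event occurred: " + e.description()) : Optional.empty();
    }

    // Deviation detection: notify only when the expected event was NOT observed
    // by the end of its time window.
    static Optional<String> deviation(boolean observed, ExpectedEvent e, int currentHour) {
        boolean windowClosed = currentHour > e.windowEndHour();
        return (!observed && windowClosed)
                ? Optional.of("deviation: expected \"" + e.description() + "\" did not occur")
                : Optional.empty();
    }

    public static void main(String[] args) {
        ExpectedEvent getUp = new ExpectedEvent("person gets up", 7, 9);
        System.out.println(traditional(true, getUp));    // notify on match
        System.out.println(deviation(false, getUp, 10)); // notify on violation
    }
}
```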
1.2 A Brief Look at Additional Issues
Automated home care is a large application domain, and in this section we present two of
the many issues that are too broad for the scope of this thesis. The first important issue is
related to the correctness of the sensor readings, i.e., how to obtain correct information from
the sensors. Sensor readings can be erroneous and noisy, which means that one has to apply
signal processing techniques to the readings before the results can be presented. However, in
our work we assume that all the sensor signals and the information we obtain from the sensors
are correct. For instance, when the temperature sensor reports that it is 21°C, we assume that this
is the actual temperature. On the other hand, this assumption may compromise the immediate
utilisation of certain sensors, i.e., we depend on a layer between the sensor and the creation of
data tuples which processes and corrects the signals before the values are reported. Despite this
assumption, we show in Chapter 6 that it is still possible to perform real experiments with real
sensors.
The second important issue is related to the psychosocial effects that constant monitoring
might have on people, especially privacy issues related to the fact that the monitoring is performed inside homes. In addition, monitoring can be used without the consent of the individuals. For instance, people who suffer from dementia might not be aware that they are monitored.
Another example is that sensor readings might be distributed to people who should not have
access to the data. Our field of expertise is not on psychosocial issues and privacy considerations. However, given the way CommonSens is designed, we implicitly take some precautions.
For instance, we want much of the data from the sensors to be processed by a computer that
is located inside the home. Only relevant information should leave the home, i.e., notifications
about the events or deviations from events that the application programmer has defined. This
means that raw sensor data is never visible to humans. For instance, video streams are never
sent out of the home, only the notifications, if any.
1.3 Claims, Methods and Approach
State of the art in automated home care consists of systems that do not integrate concepts like
event processing, sensors, and sensor placement. For instance, research on event processing
in automated home care [SBR09] does not discuss the practical issues that follow when the
sensors should be placed in the home, and how objects like walls and furniture affect the sensor
readings. These issues are discussed in other work [BF07]; however, the authors do not discuss how
they can use this information to place sensors more properly in the environment. In our work,
we integrate models for events, sensors, and the environment the sensors are placed in. This
gives a holistic view of the domain and the issues involved. This also includes providing
a query language that extends related work by facilitating personalisation and by letting the
application programmer query complex events and deviations from complex events.
In this section we present our claims, scientific methods and approach. They are deduced
from the problem statement and depend on each other. We argue for the importance of our
claims, which are related to the requirements (presented in Chapter 2). Our claims are as follows:
1. CommonSens detects complex events and deviations. It is important that the event
processing works as intended. This happens when CommonSens manages to correctly
identify sensor readings that match a set of queries. Although we assume that the sensor readings are correct, the processing of the sensor readings has to be correct as well,
and this has to be handled by CommonSens. The application programmer might want
to be notified about complex events, which means that only one event is not sufficient
to create a notification. Complex events typically consist of sets of events that can be
described through logical operators, concurrency classes and consecutive relations. A
complex event might be that the monitored person opens the kitchen door and turns on
the light before making breakfast. While the monitored person makes breakfast, he also
has to be located in the kitchen. If one of these events is not discovered and processed
correctly, the processing of the whole complex event might fail since the event processing depends on this one single event. Traditional complex event processing consists of
writing queries that describe the complex events whose occurrence we want to be notified about [esp].
On the other hand, deviation detection turns the whole process upside down. Only if
there are deviations from the complex events does CommonSens send a notification. This is
a strong addition to traditional complex event processing, and simplifies the work of the
application programmer, since he does not have to write queries about everything that can
go wrong. It is sufficient to describe the expected events. If the event processing does not
work correctly, i.e., if single events are not processed correctly, a consequence is
that many unnecessary notifications are sent.
2. CommonSens processes data tuples in near real-time. It is important that CommonSens returns notifications about the events as soon as possible after the events have occurred, i.e., that the event processing happens in near real-time. If the event processing is
too slow, it may happen that not all sensor readings can be obtained, or they might be lost
due to buffer overloads. This can result in situations where events are not detected even
if they occurred. This is related to Claim 1; all the events have to be processed correctly.
3. CommonSens simplifies the work for the application programmer and provides personalisation. As stated in the problem statement, CommonSens is designed with personalisation and simplicity in mind. Personalisation allows the application programmer to
make small changes to already existing queries. These small changes are adapted to a
new environment, other types of sensors and other persons. Related work provides personalisation as well [WT08], but does not address all the issues related to adapting to
different homes and sensors. The deviation detection mentioned in Claim 1 also simplifies the work of the application programmer, and the implementation of CommonSens
allows the application programmer to create virtual environments, test sensor placement
and emulate the event processing as well.
We substantiate our claims by modelling, designing, implementing and experimentally evaluating CommonSens. The modelling part consists of defining a set of models for
events, environments and sensors. By using models, we can identify the important properties
that apply to our application domain. For instance, it is important to know about the coverage
area of a sensor and how the objects in the environment, e.g. walls, affect the coverage area
when the sensor is placed there. In order to simplify the work of the application programmer,
we define a declarative query language that uses abstractions instead of directly addressing sensors. This means that the application programmer only needs to address locations of interest
(LoIs), e.g. the kitchen, the type of events he wants from the kitchen, and the temporal properties, i.e., when and for how long the events are supposed to occur. The application programmer
can use the language to describe that events should occur concurrently and consecutively. In
addition, the application programmer can state that the query processor should only return a notification if there are deviations from the query. CommonSens automatically instantiates a query
by investigating the environment and sensors in the environment and performs late binding between the query and the sensors. This is done by only addressing capabilities in the queries.
CommonSens also detects if sensors that provide these capabilities are available in the current
home.
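As a purely hypothetical illustration of this level of abstraction (the concrete query syntax is defined in Chapter 3, and none of the names below are part of it), a query can be thought of as a small description that refers to a LoI, a capability and temporal properties rather than to concrete sensors; late binding then maps it onto the sensors that are actually present.

```java
public class QuerySketch {
    // Hypothetical query description: a capability observed at a LoI within a time window.
    record AtomicQuery(String loi, String capability,
                       String from, String to, boolean deviationOnly) {}

    public static void main(String[] args) {
        // "The monitored person should be present in the kitchen between 07:00 and 09:00;
        //  notify only if this does NOT happen."
        AtomicQuery breakfast =
                new AtomicQuery("kitchen", "presence", "07:00", "09:00", true);

        // Late binding (performed by CommonSens during instantiation): the system,
        // not the application programmer, selects sensors that provide the requested
        // capability and cover the LoI.
        System.out.println("instantiate against available sensors: " + breakfast);
    }
}
```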
CommonSens is designed as a complex event processing system, i.e., its main purpose is to
efficiently process data tuples that it pulls from the sensors. In order to avoid a large number
of data tuples, CommonSens only pulls those sensors that are relevant for the current queries.
When the set of data tuples match a complex query, a notification is sent. If the application
programmer has stated that he is only interested in deviations instead, a notification is sent only
if the query is not matched.
The proof-of-concept implementation of CommonSens supports most of the concepts that
are defined during modelling and design. We evaluate our claims by testing use-cases, queries,
environments, sensors and personalisation in our implementation. In addition to emulating user
patterns and sensors, our implementation allows us to perform experiments with real sensors.
This shows that the concepts that CommonSens is based on can be utilised in real scenarios and
not only through simulations.
1.4 Contributing Papers
In order to support our claims, we have made important contributions to the automated home
care application domain. Our contributions are as follows:
• We define models for events, environments and sensors and show how these models facilitate personalisation through our query language.
• We introduce a way to detect events by letting CommonSens make use of the sensor
placement and the environment properties.
• We introduce the concept of deviation detection as an alternative to explicitly describing
dangerous and critical situations through queries.
These three contributions are presented in the following four papers, which form the basis
of this thesis. Below we shortly summarise each of the four papers.
• To Happen or Not to Happen: Towards an Open Distributed Complex Event Processing System [SGP08]. This paper paves the way for CommonSens by introducing our
declarative and multimodal approach. In addition, we stress the importance of support
for consecutive and concurrent events, as well as deviation detection. Being an early contribution in our portfolio of papers, the paper discusses issues like distribution and state
automata, which we have not included in the final version of CommonSens.
• CommonSens: Personalisation of Complex Event Processing in Automated Home
Care [SGP10a]. In this paper we present our models and show how CommonSens supports personalisation. We evaluate personalisation by using two environments and showing how the same query can be instantiated in both, even though the two environments
use different types of sensors. We believe that the personalisation simplifies the work of
the application programmer since he only needs to do small changes to the queries. In
addition, the queries address LoIs, which is an abstraction that applies to many homes.
• Detection of Spatial Events in CommonSens [SGP10b]. Spatial events occur in the
home and are detected by sensors that cover certain areas. We show how to combine
readings from several sensors in order to approximate LoIs in the home. In addition,
we show how CommonSens can use different propagation models to determine how the
coverage areas of the sensors are affected by objects in the environment. The application
programmer can model these environments on a computer and find optimal sensor placement before placing the sensors in the home. This simplifies the work of the application
programmer since it allows the application programmer to simulate different situations in
the environment.
• Deviation Detection in Automated Home Care Using CommonSens [SGP11]. Many
systems focus on describing the dangerous situations that can occur in the home, e.g., that
the monitored person falls or does not get up in the morning. In this paper, we want to
show that it is possible to turn this issue around. Instead of describing everything that can
go wrong, we focus on describing the expected events instead. Only when the expected
events do not occur, something might be wrong and needs special attention. This contribution simplifies the work of the application programmer since he only has to explicitly
describe the expected events. CommonSens automatically detects the deviations.
1.5 Structure of the Thesis
In Chapter 2 we present the background for our work and relate our contributions to other
work. We argue for automated home care by referring to documentation of the increasing proportion
of elderly people. We present the requirements for successful deployment of automated home care
systems. We discuss sensor technology and different concepts related to events, and point out
the shortcomings of related work. In Chapter 3 we present our models for events, environments
and sensors. In addition, we introduce our query language. Chapter 4 shows how the models
are used and how they interact. The context of the presentation is related to the life cycle
of CommonSens, from utilisation of model instances to placement of sensors, instantiation of
queries and complex event processing. The concepts of CommonSens are realised through a
running implementation, which is described in Chapter 5. Our three claims are evaluated in
Chapter 6. Finally, we conclude and address future work in Chapter 7.
Chapter 2
Background and Related Work
In this chapter we describe the technologies this thesis builds on. Throughout the chapter,
we relate our contributions, i.e., models for events, sensors, and environments, and a query
language, detection of spatial events, and deviation detection, to other work. First, we give
an overview of automated home care. We define relevant terms and roles for this application
domain. We discuss the user and system requirements that we consider relevant for our problem
statement. Second, we discuss sensor technology and give an overview of different types of
sensors. We also relate our sensor model to related work. Finally, we present the concept
of events, which play an important part of our work. The section discusses topics like event
models, and query languages. In addition, the section discusses complex event processing,
spatial issues, i.e., detecting spatial events, and deviation detection. For each of these topics, we
also refer to related work and point out where our contributions extend the current state of the art
in the field.
2.1 Automated Home Care
In 2009, the U.S. Department of Health and Human Services issued a report concerning the ageing of the world population [KH09]. Worldwide, the proportion of older people is increasing,
and by 2040 it is estimated that 14 percent of the global population will be aged 65
and older. Today, the developed world has the largest population of people aged 65 and older.
However, it is estimated that the developing countries will follow, and between 2008 and 2040,
several developing countries will experience a growth of over 100 percent concerning people in
that age span. These numbers show that there will be challenges related to taking proper care
of the elderly, since care giving requires that there are enough people and resources available.
Given this uneven distribution of ages, care giving institutions have to change and investigate
alternative solutions in order to maintain life quality for the ageing population.
A possible solution is automated home care, or ambient assisted living (AAL). The recent
advances in sensor technology have made it possible to envision scenarios that involve sensors which are placed in a home and which monitor the persons living there. If carefully and
correctly used, these sensors can obtain information that can be used to save the person from
harmful conditions, or at least send alarms when such conditions occur or are about to occur.
Examples of harmful conditions are falls or situations where the person does not behave as
expected, e.g. not getting up in the morning. The fact that harmful conditions can be identified by
ubiquitous and ambient sensors increases the quality of living because the person feels safe in
the home [ML08].
Using sensors in the home to monitor persons does not apply only to older persons. Such solutions can also be used to raise the threshold for hospitalisation in general. People suffering
from chronic diseases or other illnesses can benefit from sensors that monitor their well being
and health. In addition, preventing people from unnecessary hospitalisation is economically
beneficial for the society.
Due to its relevance for the ageing population in the whole world, automated home care is
a well-accepted research field. For instance, the European AAL Joint Programme (http://www.aal-europe.eu/), which is
implemented by funding authorities of several European countries, has a total budget of €700
million for the years 2008 to 2013. The focus and resources used in automated home care have
resulted in several systems and system designs that address the various issues related to this
domain.
In the following section, we define the roles in automated home care and analyse the requirements of the application domain.
2.1.1 Roles
In this section we present the roles that we find relevant in our work. We have found three roles
that are especially important in our application domain:
1. Monitored person. The monitored person is the individual who lives in the home and
needs special care. The monitored person is observed and monitored by the sensors that
are placed in the home. Our main focus in this work is on monitored persons who are
elderly. However, automated home care does not solely apply to elderlies. Automated
home care also applies to persons who suffer from various types of illnesses like chronic
diseases, and who need to be constantly monitored.
2. Application programmer. The application programmer is the person who implements
the automated home care application. The application is a set of queries that are adapted
to fit the needs of a monitored person. The application programmer does not have extensive knowledge of low level programming of sensors and needs a high level interface
to write queries and instantiate the environment, i.e., the home. The instantiation of the
environment consists of defining the walls, furniture, doors, and other important objects
that belong to the home. The application programmer places the sensors in the home of
the monitored person, and by instantiating the environment on a computer, the application programmer knows where to place the sensors in the real home. This is important
since the placement of sensors decides what type of information they obtain. In addition,
several sensors can cooperate in order to obtain even more information, i.e., the sensor
readings can be aggregated.
3. Helping personnel. The helping personnel are the persons who interact with the monitored person in order to obtain knowledge about his/her personal requirements. The helping personnel communicate with the application programmer in order to inform about
which queries to use. If it is not possible to reuse queries, the helping personnel give the
application programmer enough information to write new queries. If the sensors in the
home detect events that indicate that the monitored person needs any assistance, a notification should be sent to the helping personnel so that they can interact with the monitored
person.
Our main goal is to provide a complex event processing system that simplifies the work for
the application programmer. However, it is important to be aware of the existence of helping personnel as well, since their work is to interact with both the monitored person and the application
programmer. Although there might be situations where several monitored persons live together
in one home, we focus on scenarios where only one monitored person lives in a home.
2.1.2 Requirement Analysis
Our requirement analysis is performed by investigating related work in automated home care.
Current research and observations in the field of automated home care emphasise four requirements: two user requirements and two system requirements. The requirements coincide
with existing research [TBGNC09, MWKK04, WT08, WSM07, ML08] as well as official considerations in the field [LAT05].
The simplification of the work for the application programmer should not happen at the
expense of the well being of the monitored persons. The user requirements are as follows:
1. Safety. The motivation for using automated home care is to provide safety for the monitored person. We assume that feeling safe in a familiar environment increases the quality
of life for the monitored person. This requirement explicitly facilitates solutions for detecting possibly dangerous situations [TBGNC09].
2. Non-intrusive deployment. The deployment of a system that provides automated home
care should not interfere with the daily life of the monitored person [MWKK04]. It is
important that, by staying in the home, the monitored person should feel independent
[LAT05] and in control of his environment.
Safety is maintained by verifying that the system correctly detects possibly dangerous and
critical situations. If a dangerous and critical situation is detected, e.g., that the monitored
person falls, the helping personnel is alarmed immediately. Other situations are not dangerous,
and a reminder to the monitored person might be sufficient. This applies to situations where the
monitored person for instance has forgotten to take his medicine. The reminder can be an alarm
in the home or a pre-recorded voice telling what the monitored person has forgotten to do. In
other cases it is sufficient that the helping personnel calls the monitored person by using phone
or other communication devices.
To provide a non-intrusive deployment, communication between the monitored person and
the helping personnel should, unless an emergency situation occurs, be limited to telephone and
web pad (a pad connected to the Internet using the Web) [MWKK04]. A web pad solution is
for instance implemented in the work presented by Walderhaug et al. [WSM07], where persons
suffering from dementia are reminded about daily activities like taking medicine, as well as
telling if it is day or night. The monitored person should not directly provide data about his
status. Instead, the monitored person should be reminded about certain tasks if he does not
respond as expected. Such a solution is also suggested by Laberg et al. [LAT05]. Laberg et
al. state that only established and well-known interfaces should be used. This might exclude
using technical devices like web pads, since elders today might not be familiar with such an
interface. Even with non-intrusive deployment, it is important to note that the deployment needs
special consent, especially if the monitored person suffers from dementia or similar cognitive
disabilities [LAT05]. Along with this point, it is also important that the control and ownership
of the data collected by the automated home care system is well-defined [ML08], since there are
privacy issues that have to be taken into consideration during monitoring. Finally, non-intrusive
deployment should not require cognitive efforts from the monitored person, e.g. that they have
to instantiate the system themselves [MWKK04].
In recent deployment of automated home care solutions, it has been reported that “the work
has been more calm and structured” for the helping personnel [LAT05]. In order to sustain this
impression among the helping personnel, it is essential that they can obtain knowledge about
the current system deployment when a critical situation has happened and they arrive in a home.
In addition to what the system reports, this might for instance be information about the location
of the person. When the helping personnel arrive in an apartment or house, they should be
able to obtain documentation of the system functionality [LAT05] or have knowledge about this
beforehand.
The system requirements are related to how we expect CommonSens to be. One system
requirement relates to the application programmer and one system requirement relates to the
event detection. The system requirements are as follows:
1. Personalisation. Even though a consequence of successful development of automated
home care solutions imply a large scale deployment, every person is different, and has
special needs and requirements that have to be met [WT08]. These needs may also change
over time. The efforts in developing a solution for monitoring a single monitored person
should be reduced.
2. Near real-time detection of dangerous and critical situations. When something dangerous or critical occurs, it is important that the system detects this immediately and that
the delay is minimal.
In addition to increasing the quality of life for the monitored person, automated home care
should simplify the work for the helping personnel and the application programmer. For instance, we cannot require that the application programmer has extensive knowledge about low
level sensor programming. Such a requirement might limit the number of people who can interact and work in the automated home care application domain, since application programmers
and helping personnel need to be educated in health care as well.
One important argument for personalisation is that custom solutions for every monitored
person require more economic resources, i.e., they cost more money. For instance, personalisation might be implemented by providing a repository of query templates. These templates
can be adapted and modified depending on the needs of the monitored person [LAT05]. However, in order to save money this setup might need to cover large parts of the population that
needs to be monitored. In addition, personalisation needs to be supervised by professionals
like helping personnel. This is important since there may be a difference between what the
monitored persons need and what they want [ML08]. Hence, the monitored person should not
communicate directly with the application programmer. A final issue with personalisation is
that the automated home care application domain has to adapt to changing user behaviour over
time [WT08]. New user patterns need to be detected, and this means that new sensors have to
be placed in the home. In addition, if the sensors are battery powered, the batteries have to be
replaced occasionally.
2.2 Sensor Technology and Sensor Models
In recent years sensor technology has evolved into a resource which can be used in many application domains. For instance, given the research efforts in fields like
sensor networks [ASSC02], sensors have become small and efficient. The size of sensors has
decreased drastically, and commercial sensor based solutions like RFID have been successfully
deployed in many every-day situations, like registering tickets on the bus or unlocking doors in
office buildings. Sensors can also be used to measure air temperature and relative humidity in
forests [TPS+05] and vineyards [GIM+10]. Modern cars are equipped with several sensors that
measure oxygen level for tuning the ratio of air and fuel, as well as vehicle motion [SH09]. In
addition, meteorological sensors can measure solar radiation and air pressure [BR07], and there
even exist glucose sensors that can be implanted under the skin and harvest energy from, e.g. a
wrist watch [HJ08].
This section explains sensor technology and places sensors into three categories: (1) RFID
tags/readers, (2) programmable sensors, and (3) sensors that need to substantially process the
data to give meaningful results. Sensors can be defined differently, but one possible interpretation is that sensors are “devices for the measurement of physical quantities” [BR07]. Depending on the sensor type, this means that a sensor measures various types of physical values in
the environment, for instance light signals and temperature. In our work we especially focus
on sensor types that can be used within the automated home care application domain and the
main resource in our application domain is the smart home. The smart home is equipped with
sensors that obtain information about states in the home. A state can for instance be associated with
the current temperature in a given room, or the location of the monitored person.
The first category is based on RFID (radio-frequency identification). RFID is based on the
communication between a reader and a tag. When the tag is located within the coverage area
of the reader, it responds to a radio signal that the reader sends. The tag can either be active,
semi-active, or passive. The active and semi-active tags are attached to a power source, e.g.
a battery. The passive tag harvests energy from the reader and uses that energy in order to
respond. The response contains information that identifies the tag. This identification can be
a unique number, which can be used to identify objects in the kitchen [NBW07] or persons
who wear the tag. We have chosen to put RFID technology in its own category, since it always
requires the reader/tag pair in order to work.
The second category is the programmable sensor. A programmable sensor mostly consists
of three main parts: (1) the sensing device, i.e., an AD (analogue-to-digital) converter that
measures the physical quantity and turns it into a digital value, which is a simple
data type like an integer, e.g., the temperature; (2) the computation of the digital value; and (3)
the communication to other sensors or a base station. The sensing device can range from simple
temperature sensors to accelerometers and humidity sensors. The information from the sensing
device is computed by a processor with memory attached. The processor can be a simple but
power-efficient CPU or more complex and power demanding. The communication between
the sensors can either be wireless or wired. In addition, the sensors can either be powered
by batteries or plugged to the power line. Since the sensor is programmable, it is possible to
either program the sensor directly or flash the memory with a bootable image. The former can
be done by using the Contiki operating system [con], while the latter can be done during the
deployment of TinyOS [tin]. Examples of sensor types that fall into the second category are
the TelosB, MICA2 and MICAz motes [xbo] and the FLEX mini motes [evi]. Mobile phones
are examples of devices that contain several programmable sensors. Modern mobile phones
have sensors like accelerometers and temperature, and provide applications that allow the user
to read the information from the sensors. Users can program their own applications and in that
way utilise the sensors that the mobile phone provides. In addition, the applications can use the
communication part to send the data to other mobile phones or computers and receive data.
The third category is related to sensors that need substantial processing of the digital values.
While the sensors in the second category produce single data values when they are pulled,
the sensors in the third category return more complex data types, like arrays or data objects.
Multimedia sensors like cameras and microphones fall into this category. Cameras produce
images, which can be represented as a sequence of two-dimensional arrays. The sound that
comes from a microphone does not give any meaning before it is represented as an array of
sound samples.
Since sensors might have limited processing power and might have limited supply of energy,
utilisation of the communication part is an important issue in sensor technology. Research on
wireless sensor networks addresses communication as one of the main challenges in the domain
[YG03], since the process of sending one packet takes much energy. This means that it is not
appropriate to continue resending packets that are lost. In addition, since the packets are sent in
the wireless channels, there might be collisions if the communication is not synchronised. Several communication protocols have been developed for sensors, including standards like IEEE
802.15.4 [iee] and ZigBee [zig]. In the home care application domain it is possible to avoid these
communication issues by connecting the sensors to the power line in the house. Commercial
industry standards like X10 use the power line for communication as well. On the other hand,
in automated home care, the monitored person might have to wear sensors, and this means we
have to consider that the sensors might use both wireless and wired communication.
Sensor models are used to identify relevant properties of sensors. For instance, the OpenGIS
Sensor Model Language (SensorML) is used to describe sensors and sensing processes [BR07].
The language describes processes and process chains related to sensors. SensorML aims to
cover many of the issues that we have chosen to relate to the environment model. For instance,
in SensorML, the location and orientation of the sensor can be defined. In addition, the coverage
of a sensor is defined either by being in-situ or remote. The in-situ sensors measure the object
they are attached to, for instance a temperature sensor in a room. The remote sensors measure
physical quantities that are distant to the sensor, i.e., they have a coverage area. SensorML uses
the term capability for the type of output a sensor produces. In our sensor model, a capability is a description of what the sensor can observe. However, the capability in SensorML
also contains temporal properties like sampling frequency. Since the application domain is not
explicitly defined, the resulting sensor model is complex and comprehensive. SensorML and
its components are defined using UML and XML. Our sensor model uses some of the elements
in SensorML, and we still do not know whether there exist elements in our models that SensorML
does not cover. However, our sensor model is application domain specific and contains mostly
concepts that are relevant for automated home care. For instance, we define sampling frequency
as a separate property and not part of a capability. In addition, by combining our sensor model
with our environment model, we can define the orientation and placement of sensors. In CommonSens, we extend the traditional view of sensors by explicitly dividing our sensor model into
three different types. Note that these types should not be confused with the three categories
presented above. The first sensor type is the physical sensor, which transforms the analogue
signals into digital values. The second sensor type is called a logical sensor, which aggregates
data from other sensors, and depends on input from the other sensors to work. The third type of
sensor is the external source, which stores static data. This distinction of sensor types has not
been done in related work, and gives a better overview of the functionality that is expected from
the different types of sensors.
2.3 Events
CommonSens is a complex event processing system that obtains data tuples from sensors and
processes them according to a set of queries. In order to understand how CommonSens works
it is important to have a fundamental understanding of events and issues related to this concept. Hence, in this section we first introduce events by referring to different interpretations,
definitions, and models. Second, we discuss query languages that are used to describe complex events. Third, we introduce the concept of complex event processing systems and discuss
personalisation. The fourth topic is related to spatial issues and sensor placement. Finally, we
discuss the concept of deviation detection and show the novelty that CommonSens provides
concerning this issue.
2.3.1 Event Models
In this section, we first discuss general event definitions in order to give a common understanding of the concept. Second, we relate our work to event models used in automated home care.
The term event is hard to define, as the term is used in many different application domains
from philosophy to computer science. According to the Merriam-Webster dictionary, an event
can be an occurrence, i.e., “something that happens”, “a noteworthy happening”, “a social occasion or activity”, or “an adverse or damaging medical occurrence <a heart attack or other
cardiac event>”. The first definition is a common denominator for the three remaining definitions,
since they all refer to something that happens. The only difference between the four definitions
is the strictness related to the event occurrence. If an event is considered as something that happens, everything can be an event. The second definition points to a possible restriction, since it
states that an event is only a noteworthy happening or occurrence. This means that only a subset
of everything that happens can be considered as events.
Events are defined differently within the field of computer science as well, and in this section
we focus our discussion on atomic and complex events. Throughout this thesis we use the term
event for both atomic and complex events. Luckham [Luc01] defines an event as follows: “An
event is an object that is a record of an activity in a system”. According to Luckham, an event
has three aspects: form, significance and relativity. The form of any event is an object. The
object can be anything from a simple string to a tuple of attributes that tell where and when
the event occurred. The significance of an event relates to the activity. For instance, in an
ordering system every event related to new orders is significant. The relativity aspect denotes
how the event is related to other events. The relation can be temporal and causal. In addition,
two or more events can be aggregated. A version similar to Luckham’s definition is used by
Carlson [Car07]. Carlson states that “an event represents a particular type of action or change
that is of interest to the system, occurring either internally within the system or externally in the
environment with which the system interacts”. Luckham’s definition is restricted to a system,
whereas Carlson’s definition also includes the environment, which might be more than a system,
e.g. a home where things can happen. This fits very well with the event definition from Etzion
and Niblett [EN10], who include important considerations from both Luckham and Carlson:
“An event is an occurrence within a particular system or domain [...] The word event is also
used to mean a programming entity that represents such an occurrence in a computing system”.
An atomic event is an event that can not be divided into any other events. For instance,
Atrey [Atr09] states that an atomic event is “exactly one object having one or more attributes
[and which] is involved in exactly one activity over a period of time”. Since the atomic event
has temporal properties it can have two timestamps. The first timestamp denotes the point of
time when the event begins, and the second timestamp when the event ends. The atomic events
can occur concurrently or consecutively, and a set of atomic events that are consecutively or
concurrently related is called a complex event. For instance, the routine of making breakfast can
be considered as a complex event. When the monitored person opens the door to the kitchen,
opens and closes the fridge, prepares food, etc., these occurrences can be interpreted as temporally related atomic events. Complex events are described by many terms, for instance composite events [RGJ09] or compound events [Car07, AKJ06]. However, the common divisor is that
they contain related atomic events.
Events can be structured in hierarchies and abstractions. We use an example from a fabrication
line. This example is inspired by the examples that Luckham uses. The lowest level of events
is related to the assembly of the product equipment, e.g. components in computer chips. The
next level is related to the completion of one chip; an event is created when a chip is assembled.
The following events at this level relate to moving the chips from the assembly line to packing
and shipping. The whole process of packing and shipping can also be considered as one single
higher level event.
Logical sensors introduce event hierarchies, since a logical sensor can be an abstraction
and aggregation of lower level sensor readings. A hierarchical approach is an abstraction that is
quite often investigated in complex event processing. For instance, Luckham and Frasca [LF98]
introduce event hierarchies as a way of handling different levels of events in their RAPIDE
complex event processing system [LKA+ 95]. An approach they investigate is detection of
higher level events in fabrication process management systems. For the multimedia application
domain, Atrey [Atr09] introduces a model for hierarchical ordering of multimedia events, where
a complex event can consist of several atomic events, and finally transient events.
In addition to atomic and complex events, there exist many other event types and structures.
For instance, Atrey includes an event type that is at a lower level than atomic events. These
events are called transient events and have only one timestamp. According to Atrey’s model, the
atomic event can consist of two to N transient events that happen consecutively. Consequently, the atomic event still has two timestamps: the first timestamp is from the first transient
event and the other timestamp is from the last transient event. An interesting addition in Atrey’s
event model is the silent event, which is “the event which describes the absence of a meaningful
transient event”. This type of event is meant to cover the interval between two transient events.
Events can have many properties. For instance, they can occur in time and space. In addition
to temporal and spatial properties, Westermann and Jain [WJ07] introduce the informational,
experiential, structural, and causal properties, or aspects, of events in their event model E. The
informational properties of an event are information about the event. For instance, if an event is
that a person walks into a room, the informational properties about this person might be the person’s identity or how many times the person has walked into the room that day. This information
can be obtained by combining the data from the event with database records. Other information from events might be based on experience from earlier readings, so that the intervals
and spatial properties can be adapted. This also includes obtaining information from several
media to gain more information. These are denoted experiential properties. Structural properties
of events help in deciding which level of abstraction we want to investigate, which is similar to
event hierarchies. Finally, complex events can have information about causal properties, i.e., that
events depend on other events in order to happen. For instance, if a light switch is located inside
a room, it cannot be turned on before someone actually enters the room.
In their Event-Model-F [SFSS09], Scherp et al. use ontologies to obtain a common understanding about events that occur in a given domain. Instead of using queries to define
events beforehand, the events in the Event-Model-F happen related to a context, for instance
an emergency-response scenario. This scenario might have participating roles like officers in
the emergency control centre, firemen and police. In the Event-Model-F events are only entities that exist in time and not in space. Therefore, it is common to relate events to objects,
like a person opening a door. In the Event-Model-F, the identification of causality is important.
In the emergency-response scenario, an emergency control centre might get calls from people
complaining that the electricity is gone and other people complaining that there is water in their
cellars. When investigating the houses having water in the cellars, the forward liaison officers
observe that these events are caused by flooding. Since the complaints about the electricity
were registered almost at the same time, the officers in the emergency control centre can use
their event based system to reason about the cause of the electricity failure. One of the possible
causes is the flood having snapped a power cable. However, causality is hard to determine, and
the electricity failure might have been caused by other events as well.
In the discussion above, we show that events are used in many application domains and that
events are defined and interpreted differently. In the following discussion, we present event
models for automated home care. Even though the term event is used in several papers in this
application domain, there are few papers that present separate event models. The event model
is usually integrated in communication [RH09] and inherits many of the properties from the
publish/subscribe research domain. CommonSens is a centralised solution where one computer
has direct contact with all the sensors. Hence, we do not relate to publish/subscribe event
models, which are usually distributed and related to larger scenarios and application domains.
In their event-driven context model, Cao et al. [CTX09] define events as context switches.
The overall goal of their work is to close the semantic gap between environment context, which
can be sensor readings, and human context, which denotes the activities that the monitored
persons are part of. In our work we use sensor readings to reason about the activities of daily
living. Cao et al. define two types of events: Body events and environment events. The body
event lie down causes a shift from the state standing to lying. The environment events are
the results from the interaction between the user and the environment, e.g. an object is used
or household appliances are manipulated by the user. The semantic gap is closed by using an
ontology, and the classes in their ontology include and relate all the aspects of their automated
home care scenario. For instance, an activity has entities like cooking and watching TV, whereas
a sofa is a furniture object with properties like size and function. In addition, the activity has
properties like services, destination, and description syntax. The events are part of this ontology
and are reported from sensors as events when the transitions occur. By using the ontology, Cao
et al. integrate events in a different way than we do in CommonSens. In CommonSens, an
event is a state or state transition in which someone has declared interest, i.e., addressed in
one or more queries. However, since only context switches can be events according to Cao et
al., it is not possible to identify that the monitored person sits on the sofa as an event. In our
application domain it is important to detect such states, since, e.g., they can help to locate the
monitored person or to find out for how long the monitored person has been sitting on the sofa.
According to Cao et al., such information can be deduced from the switches that lead to the
context. If we want to know that the monitored person is sitting on the sofa, it is possible
to identify this context from the event sits down on the sofa. Cao et al. do not investigate
the concept of complex events, although they relate to concurrent and consecutive occurrence
of context switches. This is expressed through a first-order logic language that we discuss in
Section 2.3.2.
Park et al. [PBV+ 09] define events to be responses of sensors when the sensor readings
satisfy certain conditions, which is similar to our definition. A basic event does not contain
information about the sensor that produces the event. This allows the users to address events
like motion in the home without specifying which sensors they expect to obtain the data from.
On the other hand, an event includes information about the sensor that produced the event. An
episode is a series of events which are ordered by the production timestamp. This is similar to
our complex events, which are sets of atomic events. However, it seems that the events that Park
et al. define only contain one timestamp in contrast to our two. This implies that concurrency
might not be supported unless two sensors produce readings concurrently. Park et al. use a
statistical approach to obtain the events that form an episode. The conditions are simple: An
accelerometer sensor reports an event if the values exceed a pre-determined threshold. Park et
al. only refer to three types of basic events, which are caused by different types of accelerometer
readings: getting in and out of bed, moving objects, and moving around in the home. They do
not include any other sensors or types of events, and do not discuss the possibility of extending
the system with more events. By using only accelerometers, it might be hard to add any other
types of events as well.
Storf et al. [SBR09] also use an event model that is similar to ours. Their events are states
and state transitions; however, it is not clear whether all states and state transitions are events or
not. Events can occur in sequences as well, and they use a general rule language to describe
the events. Furthermore, their events can be ordered in hierarchies where sensor events form
aggregated events. An example of an aggregated event is that the monitored person prepares a
meal, and this event is based on sensor readings from sensors that are defined to be related to
meal preparation, e.g. a sensor that reports that the fridge is opened.
All three event models for automated home care are defined informally; however, they show
that there are different interpretations of how events can be defined in our application domain.
To the best of our knowledge, there exists no other event model for automated home care that
matches our model in both formality and generality.
2.3.2 Query Languages
Declarative query languages are inspired by SQL, and it is well accepted that a declarative approach is simpler than a procedural approach when it comes to describing complex events. For
instance, consecutiveness can be easily described by using an arrow between two events A and
B: (A -> B). This means that A should be followed by B. This approach is for instance used
in the Esper complex event processing library [esp]. In addition, a declarative query language
facilitates personalisation, since it helps removing technical issues like sensor selection from
the application programmer. In CommonSens, the queries never address sensors directly; only
the capabilities that the sensors provide are addressed. At a later stage in the life cycle of the
system, the real sensors and the queries are combined. This approach makes it possible to write
general queries which can be reused in several homes with different types of sensors.
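To make the idea of late binding concrete, the following is a minimal Python sketch, under our own assumptions, of how sensors could be selected by matching the capabilities addressed in a query against the capabilities that the available sensors provide. The data structures, sensor names and function are hypothetical illustrations and not the actual CommonSens implementation.

def bind_sensors(required_capabilities, available_sensors):
    # For each capability addressed in the query, collect the sensors in the
    # current environment that provide it (late binding at instantiation time).
    binding = {}
    for capability in required_capabilities:
        binding[capability] = [sensor["id"] for sensor in available_sensors
                               if capability in sensor["capabilities"]]
    return binding

# Hypothetical sensor configuration of one particular home.
sensors = [
    {"id": "Mot_A23", "capabilities": {"MotionDetected"}},
    {"id": "Cam_01", "capabilities": {"Video", "MotionDetected"}},
]

# The query only addresses the capability, never a concrete sensor.
print(bind_sensors({"MotionDetected"}, sensors))
# {'MotionDetected': ['Mot_A23', 'Cam_01']}

The same query can thus be instantiated differently in another home, simply because a different set of sensors provides the addressed capabilities.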
Our query language is formally defined, and supports consecutiveness, concurrency, deviations, and abstractions. In our application domain, there are some other languages as well that
provide some similar properties; however, concurrency and deviations are seldom discussed.
In order to describe context and context switches, Cao et al. [CTX09] use a first-order logic
based query language which includes the followed-by relation →. However, they do not provide
any formal definition of the language and its syntax. In addition, it is not clearly stated how
they translate the location in their queries to the sensors, even though they claim to use their
ontology. Concurrency and deviations are not discussed, although they allow concurrent context
or events by using the ∧ operator. On the other hand, this might not be sufficient if they plan
to capture all classes of concurrency, e.g. that an event occurs during the occurrence of another
event. Since they do not support concurrency classes and have limited documentation of how
the language works, we conclude that our query language is better suited to express complex events.
In their personalised automated home care system, Wang and Turner [WT08] use policy-based XML to define conditions. If a sensor reading matches the condition, a predefined action
is performed. The condition and action are part of a policy, which describes a given situation.
For instance, the policy prepare bath water applies to one given person in a home at one given
point of time, e.g. 20:00hr. When the time is 20:00hr, an action is defined to tell the home
care system to start heating bath water. The conditions are simple parameter-operator-value
triples, which is similar to the conditions in CommonSens queries. However, it is not clear
whether Wang and Turner manage to express complex events. In their examples, they use time
as condition, i.e., at certain points of time there are different events that should occur. This
approach is also similar to ours; however, we explicitly state the possibilities for expressing both
consecutive and concurrent events, something which Wang and Turner do not define.
In their query language, Qiau et al. [QZWL07] use concepts from temporal logic to describe
events. Temporal logic can be used to state that an atomic event should occur during a defined
interval. Complex events can also be described using temporal logic, and Qiau et al. use the
followed by relation → to describe that two or more events should occur consecutively. On the
other hand, as with Cao et al., only the logical operator ∧ is used to describe concurrent events,
which is not sufficient. In addition, deviation detection is not included in their language, making
it hard to detect deviations properly.
Storf et al. [SBR09] use XML to describe activities, i.e., aggregated events. An activity can
consist of several events, and the relationship between these events can be defined. The relationship can be used to define temporal and consecutive properties of the events. The language
gives the application programmer the possibility to state that some events are not obligatory.
For instance, in order to prepare a meal, not all the events have to be fulfilled every time. Each
event, if not defined as obligatory, is defined with a weight value. Therefore, it is sufficient
that the sum of the weight values exceeds a certain threshold in order for the aggregated event
to occur. This functionality is not supported in CommonSens; currently all the events have to
occur in order for the complex event to occur. On the other hand, it is not clear whether or not
their XML can express concurrency and deviations.
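The weighted-threshold mechanism described by Storf et al. can be illustrated with a small Python sketch; the event names, weights and threshold below are invented for the purpose of illustration and are not taken from their system.

def aggregated_event_occurs(observed_events, weights, obligatory, threshold):
    # The aggregated event requires all obligatory events, plus enough weighted
    # optional events so that the sum of their weights reaches the threshold.
    if not obligatory.issubset(observed_events):
        return False
    optional_weight = sum(weights[e] for e in observed_events if e in weights)
    return optional_weight >= threshold

# Hypothetical description of the aggregated event "prepare a meal".
weights = {"fridge_opened": 0.4, "stove_used": 0.5, "cupboard_opened": 0.3}
obligatory = {"kitchen_entered"}
observed = {"kitchen_entered", "fridge_opened", "stove_used"}
print(aggregated_event_occurs(observed, weights, obligatory, threshold=0.7))  # True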
The language that is provided by the complex event processing library Esper [esp] contains
much functionality related to detecting complex events and performing aggregations over a
set of events. In CommonSens, aggregations are supported as well, and can be performed by
using a custom function in the logical sensors. Even though Esper claims to support queries for
complex events, it does not support any concurrency operators other than ∧. It does not support
deviation detection either.
2.3.3 Complex Event Processing and Personalisation
In order to fully benefit from the concept of events, it is required that there exist systems that
manage and process the events. There exist many types of such systems, ranging from those
that handle single atomic events, to complex event processing systems. In this section, first
we focus on the complex event processing (CEP) systems, but we also mention some of the
related systems. Second, we look at related work concerning personalisation. We discuss CEP
and personalisation in the same section because, in order to simplify the work of the application
programmer, it is important that the CEP systems used in automated home care support personalisation.
However, as we show in the discussion, the combination of CEP, personalisation and automated
home care is novel and there exists no related work concerning this combination.
In computer science, events have traditionally been associated with event-handling by the
graphical user interface in modern operating systems. When the user moves, drags or clicks
the mouse, the operating system generates an event that contains information about the action.
Several running applications have installed event handlers, and the event information is sent to
those applications that are available. If the application has defined an event handler for this
event, the event handler is called and performs the corresponding operations. In the publish/subscribe system paradigm, the user can subscribe to certain events. For instance, in rescue
and emergency scenarios, the medical doctor can subscribe to events, e.g. the heart condition
related to certain patients [SSGP07].
CEP systems handle consecutive and concurrent data tuples. The data tuples contain information about something that has happened, e.g., that a key is pressed on the keyboard, or that
the monitored person has eaten breakfast while being in the kitchen. The origin of the data tuples, i.e., the data sources, can for example be sensors, stock tickers or computer hardware like a
mouse. The data tuples can either be pushed to the CEP system as a data stream, or the CEP system can be instructed to pull the data sources that are of interest. In some application domains it
is assumed that all the data tuples in the data stream are events. For instance, an event in a stock
trading system is a data tuple that informs about a company, the price of that company’s stock,
as well as a timestamp that tells when the trade has been performed. This approach is acceptable
for such domains, and is used in systems like Cayuga [DGH+ 06, DGP+ 07, WRGD07], SASE
[WDR06], and SASE+ [GADI08]. However, these systems only consider those data tuples that
match a set of rules. In other systems the amount of information can be considerable. This
applies to sensor-based systems, since data streams from all the sensors might lead to a high
load. Therefore, it is much more resource efficient to let the CEP system pull the relevant data
sources instead.
In order to decide if a data source is relevant or not, the CEP system needs to work according to a set of queries, which tell what the CEP system should do when certain data tuples are
obtained. The data tuples might match a set of conditions, like values, or temporal and spatial properties. The queries are written by application programmers who work within a certain
application domain. This also means that the application programmers have domain knowledge and know what sort of conditions are required. The complex event processor and query
language provide personalisation, and our CEP system easily adapts to new sensor types and environments. Since it is hard to find related work concerning CEP, personalisation and automated
home care, we relate the personalisation issue to two systems that use a different approach than
CEP.
Challenges related to personalisation of home care systems are addressed by Wang and
Turner [WT08], where they use a policy-based system with event-condition-action (ECA)
rules where certain variables can be changed for each instance. They provide the possibility of
addressing sensor types instead of sensors directly, supported by a configuration manager that
finds the correct sensors. However, they do not provide separate models for the events, sensors
and the environment to show how one capability can be provided by several different types of
sensors in different environments. This also applies to Mao et al. [MJAC08], who decouple sensors and events, but do not relate these issues to the automated home care application domain.
As stated in the introduction of this section, there exists no other automated home care system
that combines the aspects of CEP and personalisation in as structured a way as CommonSens does.
2.3.4 Spatial Issues
For spatial issues like detecting spatial events, CommonSens differs from related work since
it aims for a general and open approach that supports all possible sensors in a standard home.
The only important issue is that the sensors provide the capabilities that are addressed in the
queries and that they have a coverage area. By combining the readings from several sensors,
CommonSens can approximate the areas in the environment where the events should occur.
These areas in the environment are called locations of interest (LoIs) and are addressed in the
queries. CommonSens provides a separate environment model, which is used to solve these
spatial issues.
Related work mainly uses specific technologies to locate events and objects in the environment. For instance, Koile et al. [KTD+ 03] use a multi-camera stereo-based tracking system to
track people in activity zones in indoor environments. The activity zones are similar to our concept of LoIs. However, they do not discuss utilisation of other types of sensors and abstractions
through capabilities. On the other hand, they investigate how the activity zones can be used to
trigger different types of activities in the environment. For instance, if a person is located in an
activity zone that is defined to be a quiet zone, the system can be set to not accept telephone
calls.
CommonSens supports multimodality through capabilities, which means that one capability
can be provided by many different types of sensors. Atrey et al. [AKJ06] propose a multimodal
framework that identifies whether or not complex events occur based on user defined thresholds
and probability based on earlier occurrence. An important issue that they address is the synchronisation of the events and sampling frequency. In our work, we also address this issue, but
we use a pull-based approach and assume that the data tuples we obtain are synchronised when
they are pulled. However, this assumption might not always work in real world scenarios. For
instance, we experienced (see Chapter 6) that there were some synchronisation issues related to
obtaining data tuples from cameras. On the other hand, Atrey et al. do not discuss the concepts
of late binding between sensors and capabilities since this is not the focus in their multimodal
approach. Sensor selection is investigated by Saini and Kankanhalli [SK09] where metrics like
confidentiality and cost related to processing time and memory requirements are considered. In
our work, we have not discussed issues related to sensor selection; we simply use sensors based
on the capabilities they provide.
Using other types of sensors than cameras and microphones for detecting spatial events has
also been investigated. Chen et al. [CCC+ 05] assist WiFi based positioning with RFID readings
to cope with environmental factors like doors, humidity and people. In sensor network research,
k-coverage problems consist of finding areas in the field that are covered by at least k sensors
[YYC06].
Coverage of sensors has been investigated in the Art Gallery Problem [O’R87], where a
minimum number of guards should cover every part of an environment. CommonSens provides
a general solution for detection of spatial events which is independent of the sensor types and
environment instance, and binds together many of the already existing solutions while providing
an application programmer-friendly solution.
Even though there exists much work in the field of signal propagation in environments, e.g.
[SR92, BP00, LC07], not many works address the issue of how to make use of the sensors’ signal propagation for placement. Among some of their contributions, Huang and Tseng [HT03]
observe and discuss the issues related to the fact that sensor coverage ranges are not always
circular. This includes sensors that do not originally have circular coverage areas, like cameras,
or sensors whose coverage areas are reduced due to placement. Boukerche and Fei [BF07] also
focus on irregularities in coverage areas. These irregularities may be objects, as we have defined
them in our work. Their approach is to generate simple polygons, i.e., polygons whose sides do
not cross, from the physical sensor’s coverage area. If an object is within the coverage area, a
shape similar to the object’s shape is removed from the polygon. They do not discuss permeability, i.e., how an object affects different types of signals, nor the utilisation of signal propagation
models.
Permeability and signals are investigated in research on propagation modelling. For instance, in the field of user tracking based on signal strength from, e.g. WLAN base stations, an
established model is the one suggested by Seidel and Rappaport [SR92]. This model is called
the Wall Attenuation Factor (WAF) model, where the user specifies the number of walls and an
attenuation factor. This factor is derived empirically and is not pro-actively used to place sensors in the environment. In addition, the model leaves it to the user to specify details like shape.
Factors like the depth of the wall might be important for how the signal strength is attenuated,
and there might be several types of walls with different depth and permeability values that the
signal passes through. The model of Seidel and Rappaport is, for example, the fundamental model in Bahl
and Padmanabhan’s RADAR system [BP00] and the environment model presented by Xiang et
al. [XZH+ 05]. Another issue with these works is that they assume an already fixed position
of the base stations. We point out that objects in the environment affect the coverage of the
sensors. This means that proper environment models are imperative for sensor placement.
Coordinates are widely used to describe objects and environments in applications like
computer-aided design (CAD). We have adapted this approach in our environment model and
added properties like permeability in order to support signal propagation and facilitate sensor
placement. In this section we focus on systems that combine sensors and environment models.
In their multimodal surveillance system architecture, Saini et al. [SKJ09] use an environment model that allows system porting between environments. Their environment model consists of geometric information, contextual information and sensor parametric information. The
geometric information is for instance realised through a 3D model, and the locations of objects
in the environment have semantic labels. For instance, a semantic label can state that the object is a glass door. The contextual information concerns dynamic qualities in the environment.
Saini et al. focus on the office space application domain, and give examples of information
related to office/working hours and prohibited regions. The sensor parametric information contains information about sensors, i.e., capabilities and limitations, as well as parameters like
location and coverage area.
The data is obtained through a data acquisition module, which organises the available sensors. Each type of sensor provides data level event detectors (DLED), which describe what
types of events the sensor can detect. For instance, a video DLED might include a face detector
and a motion detector. This approach is similar to our concept of capabilities. However, it is not
clear if Saini et al. use logical sensors to provide the data level events. In our work, video is
provided by a camera, which is a physical sensor. Face detection can only be provided by a
logical sensor since it needs both the video stream and the face detection description, e.g. Haar
classifiers [WF06].
It is not clear whether or not the environment model uses properties such as permeability values
when sensors are added or removed from the environment. However, in their experiments,
Saini et al. run the same queries in two different environments. They show how the system can
reason about the possibilities of false positives given the number of sensors whose data tuples
are fused. This is similar to our approach to LoI approximation and query instantiation, i.e., the
late binding between the sensors and the queries.
2.3.5 Deviation Detection
The concepts of deviation detection and anomaly detection refer to approaches that identify unexpected patterns. A common approach is to use statistics to detect deviations. This approach is
used in fraud detection for credit cards, intrusion detection in networks, fault detection in safety
critical systems, and detection of abnormal patient conditions and disease outbreaks [CBK09, PP07].
CommonSens uses queries which describe complex events that we want to detect deviations
from. In contrast to CommonSens, rule-based systems identify deviations bottom-up. These
systems mine training data to obtain rules, i.e., patterns that do not match these rules are considered as anomalies or deviations [Agg05]. However, a training phase prevents the application
programmer from stating other rules than those that are mined from the training set. For instance, the monitored person may suffer from dementia. CommonSens notifications can be
used to guide the monitored person. In these scenarios it can be hard to initiate a successful
training phase, since the monitored person may be guided by another person. The presence of
another person will affect the sensor readings. It is hard to extract from the training set the
sensor readings that relate only to the monitored person.
CEP systems use declarative queries or a GUI to define rules that describe complex events
[SSS10]. This approach does not fully support the concept of deviation detection. For instance,
the commonly used CEP library Esper [esp] supports detection of complex events. However,
Esper and similar systems only return results if there exist matching events. This contradicts
our concept of deviation detection since they do not report anything if the query is not matched.
For instance, it is not possible to prefix each atomic query with the logical operator NOT, since
this might violate the near real-time requirements of automated home care. In Esper, when the
data tuples do not match the condition in the query, i.e., a deviation, Esper will simply continue
the evaluation of the query. Only if none of the atomic queries are matched does Esper send a
message that the query is matched, i.e., that a deviation has occurred.
2.4 Discussion and Conclusion
With the recent development of sensor technology, application domains like automated home
care can raise the threshold for hospitalisation and for putting elderlies in retirement homes.
Unfortunately, sensors are not easy to program. In automated home care, there will be application programmers who are responsible for programming the sensors so that they can monitor the
persons correctly. In order to simplify the work of the application programmer, it is important
to find alternatives so that the application programmer does not have to manually program each
sensor. One possible approach is to use the concept of events and complex event processing.
Complex event processing allows the application programmer to write high level declarative
queries. The sensors themselves do not have to be programmed; instructing the sensors to send
data is done automatically by the complex event processing system. This simplifies the work
for the application programmer, and this is our overall goal.
This chapter introduces important concepts that our work builds on, and relates our work
to other contributions in the field, especially sensors and concepts related to events. As
stated in Chapter 1, CommonSens has three main contributions: (1) models for events, sensors, and
environments, together with a query language, (2) detection of spatial events, and (3) deviation detection. For
sensors, we show that there exists an extensive model from which we extract some interesting
features, and we extend related work by explicitly distinguishing three different types
of sensors. With respect to events, the main focus is to show that there are several different
interpretations of events. Personalisation is an important focus in our work. However, there
is not much related work in automated home care concerning this issue. Hence, we relate the
personalisation to the discussion of complex event processing and conclude that there does not
yet exist any complex event processing system that provides personalisation in the automated
home care application domain. When we discuss the spatial issues, we present related works
that combine readings from several sensors in order to locate objects. Our environment model
is inspired by how coordinates are used to model 3D space, and the model is used to perform proper
placement of sensors in the homes. Finally, we show how our interpretation of deviation detection differs from statistical approaches, since we simply use the complex queries as ground
truth for what we regard as correct.
Finding related work for our application domain is not a challenge, since there are several
research projects that focus on these issues. However, we have noted that there are many different approaches in this domain, and it is sometimes unclear whether or not a system actually
supports a given functionality. Based on the related work study that we have performed, it is certain that there exist no other automated home care systems that support all the aspects that are
covered by CommonSens.
Chapter 3
CommonSens Data Model
In this chapter we present the fundamental models in CommonSens. These models are defined
in order to identify relevant attributes from the elements in the application domain, i.e., the
environment and sensors. It will be easier for the application programmer to instantiate and
reuse environments, sensors and queries if there exist models that the concrete instances of
environments and sensors can be mapped to. In addition, events are not uniquely defined in
related work, and researchers give many different meanings to that term. Hence, we have to
explicitly define an event model that fits our requirements.
First, we define the three models that identify and describe the concepts and semantics of
events, the environment where events happen, and the sensors that are used to obtain information about the environment. Second, we define the semantics and the syntax of the query
language that is used to describe events.
3.1 Event Model
In our conceptual model of the real world, everything that happens can be modelled through
states and state transitions. For instance, when a door is opened, it transitions from a state with
the value ‘door closed’ to a state with the value ‘door open’. The current temperature is a single
state with a value. When the temperature changes, the current state transitions to another state
with another value. Based on this conceptual model of the real world, we define an event as:
Definition 3.1.1 An event e is a state or state transition in which someone has declared interest.
A state is a set of typed variables and data values. A state transition is when one or more of the
data values in a state change.
The type of variables to use depends on the instantiation of the system. Not all states and
state transitions in the real world are of interest. Therefore, we view events as subsets of these
states and state transitions. Figure 3.1 relates the core elements of the approach to our conceptual model of the real world. The application programmer uses declarative queries to describe
events. The sensors detect states in the real world, and if the state values match the conditions
in the queries, these are identified as events. State transitions become events if all the states that
are involved in the transition match a set of queries.
[Figure: queries describe events, sensors detect states and state transitions, and the states that match the queries are detected as events.]
Figure 3.1: The relation of the core elements in our conceptual model of the real world.
Our event definition works well with existing event processing paradigms. For example, in
publish/subscribe systems, events can be seen as publications that someone has subscribed to.
To identify when and where an event occurs and can be detected, temporal and spatial
properties are important to specify. For addressing temporal properties we use timestamps,
which can be seen as discrete subsets of the continuous time domain, and time intervals [PS06].
Definition 3.1.2 A timestamp t is an element in the continuous time domain T : t ∈ T . A time
interval τ ⊂ T is the time between two timestamps tb (begin) and te (end). τ has duration
δ = te − tb .
For instance, we want to ensure that the monitored person’s dinner has lasted for at least 20
minutes, i.e., δ ≥ 20 min.
To distinguish events from other states and state transitions, it is important to have knowledge
about the spatial properties of the events, i.e., where in the environment they happen. These
spatial properties are specified through locations of interest, denoted LoIs.
Definition 3.1.3 A location of interest (LoI) is a set of coordinates describing the boundaries
of an interesting location in the environment.
The LoI can be seen as a shape, which is defined in Definition 3.2.1.
In CommonSens we distinguish between atomic events and complex events, and an event that
cannot be further divided into lower level events is called an atomic event.
Definition 3.1.4 An atomic event eA consists of four attributes: eA = (e, loi, tb , te ). e is the
event, loi is the LoI where (|loi| = 1 ∨ loi = ∅) and (tb , te ∈ T ) ∨ (tb = te = ∅). If the
timestamps are used, they are ordered so that tb ≤ te .
loi, tb and te do not necessarily need to be instantiated. An atomic event without loi, tb and
te is, for example, the state where a sensor that monitors a lamp returns a reading telling that
the lamp is turned on. If we want to address that one certain lamp in the home is turned on at a
certain time of the day, the temporal properties and LoI need to be instantiated.
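As a minimal Python sketch, an atomic event from Definition 3.1.4 could be represented as follows, with the LoI and the two timestamps as optional attributes; the names and representation are illustrative assumptions and not the CommonSens implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AtomicEvent:
    # e: the state or state transition of interest, here kept as a description.
    event: str
    # loi: at most one location of interest, or None if not instantiated.
    loi: Optional[str] = None
    # tb, te: begin and end timestamps (here seconds of the day), or None.
    tb: Optional[float] = None
    te: Optional[float] = None

# An atomic event without loi and timestamps: "the lamp is turned on".
lamp_on = AtomicEvent(event="lamp turned on")

# The same state, but for one particular lamp at a certain time of the day.
lamp_on_kitchen = AtomicEvent(event="lamp turned on", loi="kitchen",
                              tb=72000.0, te=72000.0)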
In this section we do not define ways of describing the order and relations the atomic events
have. Here, we simply state that the atomic events in eC can happen concurrently or consecutively. To classify this we define five classes of concurrency and one class for consecutiveness.
[Figure: pairs of atomic events A and B drawn as time intervals, illustrating the five concurrency classes A equals B, A starts B, A finishes B, A during B, and B overlaps A, as well as the consecutive class A before B.]
Figure 3.2: Concurrent and consecutive atomic events.
These classes are inspired by Carlson’s six interval relations [Car07] and Allen’s temporal intervals [All83] and are shown in Figure 3.2. The five concurrency classes are equals, starts,
finishes, during, and overlaps. The consecutive class is called before. In the figure, the interval
between the atomic event’s two timestamps is shown as an arrow, and we show pairs of atomic
events, which are called A and B. Concurrency is formally defined as:
Definition 3.1.5 Two atomic events eAi and eAj are concurrent iff ∃tu such that
(tu ≥ eAi.tb) ∧ (tu ≥ eAj.tb) ∧ (tu ≤ eAi.te) ∧ (tu ≤ eAj.te)
For two atomic events to be concurrent, there must be a point in time, i.e., a timestamp, at which
the two atomic events overlap. Note that in the previous and the following
definitions we use dot notation to denote the attributes, i.e., eAi .tb refers to the tb attribute of
eAi . Two events are consecutive when they do not overlap.
Definition 3.1.6 Two atomic events eAi and eAj are consecutive iff eAi .te < eAj .tb .
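As a worked illustration of Definitions 3.1.5 and 3.1.6, the existence of a common timestamp tu reduces to a comparison of the interval boundaries; the Python sketch below assumes that timestamps are plain comparable numbers and is not the CommonSens implementation.

def concurrent(a_tb, a_te, b_tb, b_te):
    # Definition 3.1.5: there exists a tu that lies in both intervals,
    # which holds exactly when the two intervals overlap.
    return max(a_tb, b_tb) <= min(a_te, b_te)

def consecutive(a_tb, a_te, b_tb, b_te):
    # Definition 3.1.6: A ends strictly before B begins.
    return a_te < b_tb

# A = [10, 20] and B = [15, 25]: B overlaps A, so they are concurrent.
print(concurrent(10, 20, 15, 25))   # True
# A = [10, 20] and B = [21, 25]: A before B, so they are consecutive.
print(consecutive(10, 20, 21, 25))  # True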
A set of atomic events can be part of a complex event.
Definition 3.1.7 A complex event eC is a set of N atomic events: eC = {eA0 , . . . , eAN −1 }.
Since eC is a set, the atomic events can be unordered. However, the atomic events have
timestamps that state exactly when they occur. For instance, eA1 and eA2 both happen at 16:54hr,
i.e., eA1 .tb = 16:54hr ∧ eA1 .te = 16:54hr ∧ eA2 .tb = 16:54hr ∧ eA2 .te = 16:54hr. The
complex event eC1 contains the two atomic events. Hence, eC1 = {eA1 , eA2 }. It is unnecessary
to use operators or relations between the atomic events; eC implicitly contains all consecutive
[Figure: three nested sets, where V contains all sequences of data tuples related to all queries, N ⊆ V contains the sequences related to one particular query, and E ⊆ N contains the sequences that match that particular query.]
Figure 3.3: Overview of the V, N and E sets.
and concurrent relations between any two atomic events. However, when the application programmer wants to describe that the two atomic events are concurrent, he is required to use a
query language. The query language that uses the concepts of CommonSens is described in
Section 3.4.
Finally, an event might not occur when it is supposed to, i.e., when someone has declared
interest in it. In CommonSens, a non-occurrence of an event is called a deviation. Deviation
detection is the process of identifying the states and state transitions that do not match
the queries, i.e., the process of identifying non-occurrence of events. For instance, it might be
expected that the monitored person takes his medicine between 08:00hr and 09:00hr
every day. This complex event is stated through a query, and if the monitored person does
not take his medicine during the defined interval, this is interpreted as a deviation from the
complex event. The application programmer explicitly writes in the query that he is interested
in a notification about the deviation. He is given a notification if the complex event does not
occur. In the remaining text we use the term event for both complex and atomic events.
In the following we explain our interpretation of deviations, and how they differ from events.
We use terms from set theory to separate state values, events and deviations. This approach
defines state values, complex and atomic events and deviations from events as three related sets
E ⊆ N ⊆ V . The subset relation of the three sets is illustrated in Figure 3.3.
V is defined by all the sensors that are used in the queries and contains all state values that
might theoretically be read from the real world. This means that V contains all possible sequences of data tuples within the range of state variables the sensors can read. Consequently,
V can be very large. However, the temporal properties in the queries limit the time interval that
defines V .
We want to identify the possible deviations from any given event that is described in a
complex query. Therefore, we need to identify a subset of V that is defined by each running
instantiated complex query. The set N contains all possible sequences of state values that
might be read by the sensors that instantiate a given complex query. Since N contains all
possible sequences of state values there might exist sequences in N that do not match the query.
Even though N can be significantly smaller than V , N is still a large set. We want to define
a subset of N that contains all the sequences of data tuples that actually match a query. These
sequences belong to the set E. The deviation of an event depends on the N and E sets and is
defined as:
Definition 3.1.8 A deviation D from an event is a set of data tuple sequences D = N \ E.
D is the difference between the N and E sets. Deviation detection is an important addition to
traditional event processing because it provides a simpler way to solve the challenges related
to detecting dangerous situations. In addition, deviation detection helps simplify the work of
the application programmer. Instead of explicitly describing everything that can go wrong, the
application programmer only has to describe normal behaviour. If instructed to, CommonSens
automatically detects the deviations if the events do not occur, and sends a notification.
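Operationally, deviation detection amounts to reporting the non-occurrence of an expected event once its time window has closed. The following Python sketch is a simplified illustration of this idea under that assumption; it is not the actual CommonSens detection algorithm.

def deviation_detected(matching_tuples, window_end, now):
    # A deviation is the non-occurrence of the expected event: the time window
    # described in the query has closed and no matching data tuple sequence
    # (i.e., no element of E) has been observed.
    return now > window_end and len(matching_tuples) == 0

# Expected: medicine taken between 08:00hr and 09:00hr (here in seconds of the day).
window_end = 9 * 3600
print(deviation_detected(matching_tuples=[], window_end=window_end,
                         now=9 * 3600 + 60))  # True, so a notification is sent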
3.2 Environment Model
The environment, or home, is the physical space where the monitored person lives. CommonSens uses an environment model to identify how the sensor signals are affected when the sensors
are placed in the home. The goal is that all LoIs specified in the queries are covered by corresponding sensors. The environment is described by a configuration of objects. These objects
could be rooms, walls, furniture, etc. Once an object is defined, it can be reused in any other
instantiation of any other home. Objects have two core properties: their shape, and how they
impact different signal types (e.g. radio and light), i.e., their permeability.
Definition 3.2.1 A shape s is a set of coordinates:
s = {(x, y, z)0 , . . . , (x, y, z)N −1 }
The triplets in s describe the convex hull (boundary) of the shape. All triplet values are relative
to (x, y, z)0 , i.e., the base of the coordinate system.
While each object, e.g. a wall, has one shape, it can have several permeability values for
different signal types. For instance, a wall stops light signals while a radio signal might only be
partially reduced by the wall. Therefore, it is important to identify how permeable objects are
regarding different types of signals.
Definition 3.2.2 Permeability p is a tuple: p = (val, γ). val ∈ [−1, 1] is the value of the
permeability. γ denotes which signal type this permeability value is valid for.
The lower the value for p.val is, the lower the permeability is. If the permeability value is 0,
the signal does not pass. If p.val has a negative value, the signal is reflected. The object
is defined as follows.
Definition 3.2.3 An object ξ is a tuple: ξ = (P, s). P = {p0 , . . . , pN −1 } is a set of permeability
tuples. s is the shape.
We use ξ.P to support that an object can have permeability values for many different signal
types. Finally, the environment is defined as an instance of a set of related objects.
Definition 3.2.4 An environment α is a set of objects: α = {ξ0 , . . . , ξN −1}. Every ξi ∈ α \ {ξ0}
is relative to ξ0 .
In the definition of the shape s we state that all triplet values in the shape ξi .s of an object
are relative to ξi .s.(x, y, z)0 . In an environment αy where ξi is located, ξi .s.(x, y, z)0 is relative
to ξ0 .s.(x, y, z)0 , which is set to (0,0,0). Since all the triplet values in the shapes are relative
to a base, only the base values have to be changed when the objects are reused in different
environments.
In the environment, it is possible for two objects to overlap or share the same space. This is
especially true in the case where objects are placed in a room, since the rooms are also objects.
In cases where these two objects have permeability values for the same type of signals, the
lowest value of the two applies in the shared space. For example, a table is located in a room, and the
table stops light signals. Therefore, regarding light signals, in the coordinates that are shared by
the table and the room, only the permeability values from the table apply.
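A small Python sketch can make the environment model and this overlap rule concrete; the shapes, permeability values and helper names are hypothetical illustrations based on the definitions above, not the CommonSens implementation.

from dataclasses import dataclass, field

@dataclass
class EnvObject:
    # Shape: coordinate triplets relative to the first triplet (the base).
    shape: list
    # Permeability value in [-1, 1] per signal type (Definition 3.2.2).
    permeability: dict = field(default_factory=dict)

def effective_permeability(objects_at_point, signal_type):
    # Overlap rule: where objects share space and define permeability values
    # for the same signal type, the lowest value applies.
    values = [o.permeability[signal_type] for o in objects_at_point
              if signal_type in o.permeability]
    return min(values) if values else 1.0  # assume 1.0 if no object interferes

room = EnvObject(shape=[(0, 0, 0), (5, 0, 0), (5, 4, 0), (0, 4, 0)],
                 permeability={"light": 1.0, "radio": 0.8})
table = EnvObject(shape=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
                  permeability={"light": 0.0})

# In the space shared by the table and the room, only the table's value applies.
print(effective_permeability([room, table], "light"))  # 0.0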
Instances of the environment model can be reused when several objects are made of the same
material. A wall made of a given type of concrete can be defined once, i.e., the permeability
values for the various types of signals need to be set only once. When using the concrete wall
in an instance of an environment, only the physical dimensions need to be changed. In case
there are several similar apartments that need to be instantiated, the same wall object type can
be reused. If one defines permeability tuples for another signal type, this can be added to the
template wall. The instances inherit the new permeability tuples if new sensors are added.
3.3 Sensor Model
Traditionally, sensors read analogue signals from the environment and convert these into data
tuples. Hence, a data tuple is the information that a sensor has obtained about a state in the real
world. The data tuple is defined later in this section.
Our sensor model should achieve three objectives. First, when we have described the events,
CommonSens has to determine which types of sensors to use. Second, events might happen over
time, so we want the sensors to utilise historical and stored data together with recent data tuples.
Third, we want to aggregate data tuples from several sources of information. In order to meet
the first objective, each sensor should provide a set of capabilities.
Definition 3.3.1 A capability c is the type of state variables a sensor can observe. This is given
by a textual description: c = (description).
Capabilities like temperature reading or heart frequency reading return values of type integer. However, capabilities might be much more complex, like face recognition or fall detection.
To capture all these possibilities in our model we use a string to describe capabilities, such that the
description in a particular implementation can be anything from simple data types, such as integers, to XML and database schemas [GMUW08]. The application programmer should not
address particular sensors, but rather sensor capabilities. To enable CommonSens to bind capabilities to the correct sensors it is necessary to describe capabilities based on a well-defined vocabulary.
The capabilities play an important role for how the queries and the sensors are mapped.
Figure 3.4: Example of how a wall reduces the coverage area of a camera.
The data tuples that the sensors produce are used to send information about the reading the
sensor has performed.
Definition 3.3.2 A data tuple d = <φ, c, val, tb, te> consists of the sensor φ that produced the
data tuple, the capability c, and the value val of the capability. tb is the timestamp that denotes
the beginning of the event and te denotes the end time, where tb ≤ te .
For instance, a data tuple from a motion detector Mot_A23 that provides the capability MotionDetected and which reports true at 1:44.34pm could look like <Mot_A23,
MotionDetected, true, 1:44.34pm, 1:44.34pm>.
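As an illustration, such a data tuple could be represented as follows in Python; the field names mirror Definition 3.3.2, but the representation itself is an assumption and not the CommonSens implementation.

from collections import namedtuple

# d = <phi, c, val, tb, te> from Definition 3.3.2.
DataTuple = namedtuple("DataTuple", ["sensor", "capability", "value", "tb", "te"])

d = DataTuple(sensor="Mot_A23", capability="MotionDetected",
              value=True, tb="1:44.34pm", te="1:44.34pm")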
In order to meet the two last objectives, we define three distinct types of sensors. The
physical sensor is responsible for converting the analogue signals to data tuples. The external
source only provides stored data, and the logical sensor processes data tuples from all the three
sensor types and provides data tuples that are results from this processing.
Definition 3.3.3 A physical sensor φP is a tuple: φP = (cov, γ, f, C). cov denotes the coverage
of the physical sensor. The coverage of a physical sensor can either address specific objects or
an area. γ is the type of signal this sensor sends or receives. f is the maximal sampling
frequency, i.e., how many data tuples the physical sensor can produce every second. C =
{c0 , . . . , cN −1 } is the set of capabilities that the sensor provides.
The physical sensor is limited to only observe single states in the home. When the coverage
of a physical sensor covers an area, this is denoted coverage area. Usually, the producer of the
physical sensor defines the coverage area as it is in an environment without obstacles. When
the physical sensor is placed in the environment, the coverage area might be reduced due to
objects in the environment that have permeability tuples that match the signal type of the current
physical sensor. When a physical sensor covers the power line from a light switch, the capability
describes the physical sensor’s ability to tell whether or not the light is turned on. This coverage
is not an area, and is not affected by the permeability tuples in the environment.
An example of how an object in the environment reduces the coverage area of a sensor is
shown in Figure 3.4. Figure 3.4 a) shows the coverage area of a video camera as it is defined by
the producer. When the video camera is placed in the environment its coverage area is reduced
by e.g. a wall (see Figure 3.4 b)).
The external source φE includes data that is persistently stored. The main purpose of an
external source is to return data tuples from the storage.
[Figure 3.5 illustrates the capability hierarchies: FallDetected is provided by a logical sensor (LS) that depends either on Accelerometer (PS) and User (ES), or on Camera (PS), FaceRecognition (ES) and FallDetected (ES); TakingMedication is provided either directly by a physical sensor (PS) or by a logical sensor (LS) that depends on Camera (PS) and MedicationTaken (ES). PS: PhysicalSensor, ES: ExternalSource, LS: LogicalSensor.]
Figure 3.5: Examples of capability hierarchies for detecting falls and taking medication.
Definition 3.3.4 The external source φE is an attribute: φE = (C). C is the set of the capabilities it provides.
In contrast to the physical sensor, the external source does not obtain readings directly from
the environment. Instead, it utilises historical and stored data, which later can be aggregated
with recent data tuples. This is practical for applications like face recognition, e.g. to identify
the monitored person when he is taking medicine. Thus, we do not include attributes like
coverage. For instance, we allow DBMSs and file systems to act as external sources, as long as
they provide data tuples that can be used by CommonSens.
The information that can be obtained from an external source can for instance be Haar
classifiers for object and face recognition [WF06]. The information from the external source
can be aggregated with images from a camera in order to detect objects or faces.
In order to perform aggregations between for instance historical and recent data tuples or
Haar classifiers and images, we finally define the logical sensor, which performs computations
on data tuples from other sensors.
Definition 3.3.5 The logical sensor is a tuple: φL = (Cd, aggL, Cp, f). Cd is the set of all the capabilities it depends on and aggL is a user-defined function that aggregates the data tuples from Cd. Cp is the set of capabilities it provides. f is defined as for physical sensors but depends on aggL and Cd.
It should be noted that the definition of logical sensors does not specify specific sensors
to provide the input data tuples. Instead it specifies the capabilities required to provide the
input data tuples. These capabilities can in turn be capabilities of external sources, physical
sensors or logical sensors. This leads to a hierarchical dependency structure of capabilities with
arbitrary depth, which we call a vocabulary tree. The leaves in a vocabulary tree are capabilities
either provided by physical sensors or external sources. Each capability is a unique keyword
describing what a sensor is able to sense. As such the keywords bring semantics into the system
and the vocabulary tree can be seen as a simplified version of an ontology. Vocabulary trees are
our main mechanism to model logical sensors that detect complex events, like fall detection or
object recognition, which cannot be modelled by standard data types like integer.
Figure 3.5 shows an example of three simple vocabulary trees with capabilities for detecting
falls and that medication is taken. FallDetected is provided by a logical sensor that either
depends on the accelerometer capabilities together with the information about the monitored
person (personID), or the combination of the Camera capability and capabilities for face
recognition and fall detection. The available sensors define which of the two FallDetected and TakingMedication configurations is chosen. For instance, if sensors that provide the capabilities Accelerometer and User are already installed in an environment, it is more convenient for CommonSens to instantiate the FallDetected configuration that depends on these two capabilities. The figure shows a possible instance with sensors that provide
the needed capabilities. The two capabilities Accelerometer and User are provided by
a physical sensor and an external source, respectively. TakingMedication is provided by
a physical sensor, which e.g., uses an RFID tag/reader pair to detect that the monitored person holds the medication, or a logical sensor that depends on the capabilities Camera and
MedicationTaken.
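To make the role of the aggregation function aggL (Definition 3.3.5) more concrete, the following sketch shows how a FallDetected logical sensor could combine an Accelerometer data tuple with the person identity from a User external source. It is our own illustration, not code from the prototype: the acceleration threshold is a made-up value, the DataTuple class is the sketch from above, and both input tuples are assumed to be present.

    import java.util.List;

    // Illustrative aggregation function aggL of a FallDetected logical sensor with
    // Cd = {Accelerometer, User} and Cp = {FallDetected}.
    final class FallDetectedAggregation {

        static DataTuple aggregate(List<DataTuple> inputs) {
            Double acceleration = null;
            String personId = null;
            long tb = Long.MAX_VALUE, te = Long.MIN_VALUE;
            for (DataTuple d : inputs) {
                if (d.capability.equals("Accelerometer")) {
                    acceleration = ((Number) d.value).doubleValue();
                } else if (d.capability.equals("User")) {
                    personId = (String) d.value;
                }
                tb = Math.min(tb, d.tBegin);
                te = Math.max(te, d.tEnd);
            }
            boolean fall = acceleration != null && acceleration > 25.0; // illustrative threshold
            // The produced data tuple provides the FallDetected capability for the identified person.
            return new DataTuple("FallDetector_" + personId, "FallDetected", fall, tb, te);
        }
    }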
3.4 Query Based Event Language
This section defines the query language that the application developer can use to describe events.
The basic idea is that the application programmer formulates queries that describe complex
events in the environments. Our query language supports the reusability concept of CommonSens by letting the application programmer only address LoIs, capabilities and temporal properties. The application programmer can explicitly denote whether he is interested in the events or
the deviations from the events. First, we present the semantics of the query language. Second,
we present the syntax.
3.4.1 Semantics
The query language uses capabilities from the sensor model, and LoIs and timestamps from the
event model to support reuse. In order to detect an event, the application programmer assigns
conditions to the capabilities. The query that addresses one atomic event is called an atomic
query.
Definition 3.4.1 An atomic query qA is described by a tuple: qA = (cond, loi, tb , te , preg).
cond is a triplet (c, op, val), where c is the capability, op ∈ {=, ≠, <, ≤, >, ≥} is the operator,
and val is the expected value of the capability. If set, loi, tb , te and preg specify the spatial and
temporal properties.
Since a capability is a type of state variable, cond is used to describe a state. For instance,
in order to detect motion, which is a capability, the condition can be Motion = True. If the application programmer wants to detect motion in the kitchen, he sets loi = Kitchen. In addition,
the application programmer can set the temporal properties in order to specify when motion is
expected to be detected in the kitchen. The temporal properties are discussed below. When it
is needed to describe complex events, two or more atomic queries have to be used. These are
called complex queries.
[Figure 3.6 plots hourly temperature readings between 08:00 and 22:00, marking for each hour whether Temperature > 19C or Temperature <= 19C, for four accepted combinations (Combination 1 to Combination 4).]
Figure 3.6: Examples of allowed sequences in a P-registered query.
Definition 3.4.2 A complex query qC is a list of atomic queries, and logical operators and
relations ρi between them: qC = {qA0 ρ0 . . . ρN −2 qAN −1 }. If the complex query only consists
of one atomic query, ρ0 = ∅.
The logical operators in the query language are ∧, ∨ and ¬. If any other logical operator
is needed, these three operators can be used to define it. Complex queries describe complex
events and state transitions. In the following, the term query, denoted q, covers both atomic and
complex queries. An example of a complex event is that the monitored person is expected to
get out of bed in the morning, take medication and make breakfast. Taking medication consists
of standing close to the medication cupboard and taking two different medicines. The breakfast consists of opening the fridge, taking out food, preparing the food, eating and cleaning up within a certain
period of time.
In order to describe consecutive events like the order of the events while making breakfast,
we use one temporal relation, i.e., the followed by query relation (→).
Definition 3.4.3 The followed by relation → between two queries qz and qx :
{qz → qx ⇔ qz .te < qx .tb }.
With respect to the temporal properties of the events, the application programmer should be
able to write two types of queries. The first type should explicitly denote when an event should
happen, e.g. between 16:00h and 18:00h. The other type should denote how long an event lasts
when it is detected. These two types of queries are called timed queries and δ-timed queries. A
timed query has defined tb and te values, which means that the two timestamps denote exactly
the duration of the event. δ-timed queries define the duration of the events when they first start.
These are denoted by overloading the semantics and setting only tb . The timed or δ-timed query
can be percent-registered (P -registered) or not. This is defined in preg.
A P -registered query accepts (1) atomic events that last for minimum or maximum the
specified duration of the query, and (2) sequences of atomic events where the duration from the
last state value to the first state value equals the specified time interval or the duration, and where
the percentage of the atomic events that satisfy the condition matches the maximum or minimum
specified in the P-registration. preg is therefore a tuple (m, val), where m = min ∨ max and
val is a percentage value between 0% and 100%. In order to explain P -registration we define
the data tuple sequence seq, which is defined as timely ordered list of data tuples.
Definition 3.4.4
seq = {d0 ≺ ... ≺ dn | ∀di, di+1 : di.tb ≤ di+1.tb}
A sequence may contain pairs of data tuples that share the same begin time, i.e., occur concurrently, as well as data tuples that occur consecutively.
Concerning Point (2) of what a P -registered query accepts, CommonSens accepts all sequential combinations of events where the ratio and duration are correct. A δ-timed or timed
atomic query qA where the P -registration is not defined is equivalent to a similar query with
qA .preg = {min, 100%}.
We show the semantics of P -registration with a simple example. We assume that the environment consists of only one sensor that provides the capability Temperature. The sensor
covers the LoI kitchen. A query states that the temperature in the kitchen should be above
19◦ C at least 80% of the time between 08:00hr and 23:00hr.
To simplify the example, we assume that the sensor is pulled once every hour. This gives 15
temperature state readings in total. According to the query 12 temperature readings (80% of 15)
have to be above 19◦ C. Figure 3.6 shows a small set of the legal combinations of state values
that match the query. Combination 1 shows a consecutive sequence of accepts that match the
condition 12 times. Combinations 2 and 3 show accepted combinations of events, which still
satisfy the P-registration. Since the query states that minimum 80% of the events should match
the condition, Combination 4 is also accepted. In addition, a complex query can be timed or
δ-timed, and P -registered.
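A small Java sketch (our own illustration, operating on plain values instead of full data tuples) shows how a min-P-registered condition can be checked once all readings of the interval are available; the temperature values in the usage example are fabricated for illustration only.

    import java.util.List;

    // Checks whether at least minPercent of the readings satisfy "value > threshold".
    final class PRegistrationCheck {

        static boolean satisfiesMin(List<Double> readings, double threshold, double minPercent) {
            if (readings.isEmpty()) {
                return false;
            }
            long matches = readings.stream().filter(v -> v > threshold).count();
            return 100.0 * matches / readings.size() >= minPercent;
        }

        public static void main(String[] args) {
            // 15 hourly temperature readings between 08:00hr and 23:00hr (illustrative values).
            List<Double> temps = List.of(20.1, 20.3, 19.5, 18.9, 20.0, 20.2, 19.8, 19.7,
                                         20.5, 18.7, 19.9, 20.4, 20.6, 19.6, 20.0);
            // 13 of the 15 readings exceed 19°C (about 87%), so "min 80%" is satisfied.
            System.out.println(satisfiesMin(temps, 19.0, 80.0)); // prints true
        }
    }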
It is not required that an atomic or complex query is P -registered. If a timed or δ-timed
query is not P -registered, it means that all the data tuples that arrive during the evaluation have
to match the condition. If the atomic query is not timed, it can not be P -registered. This means
that it is sufficient that the condition is matched once for the evaluation to be successful. It is
impossible to get deviations from such atomic queries, because there are no temporal properties
in such atomic queries. On the other hand, this does not apply to complex queries that are not
timed or P -registered. When this happens, the temporal properties of the atomic queries apply.
In addition to using logical operators and the → relation to describe complex events, the
application programmer can use five concurrency operators to describe temporal dependency
between queries. These concurrency operators match the temporal intervals introduced by Allen
[All83], however the two intervals in Allen’s work that support consecutiveness (takes place
before and meets) are covered by the followed by relation.
Definition 3.4.5 The concurrency operators:
{EQUALS(qz, qx)   | qz.tb = qx.tb ∧ qz.te = qx.te}
{STARTS(qz, qx)   | qz.tb = qx.tb ∧ qz.te < qx.te}
{FINISHES(qz, qx) | qz.tb < qx.tb ∧ qz.te = qx.te}
{DURING(qz, qx)   | qz.tb > qx.tb ∧ qz.te < qx.te}
{OVERLAPS(qz, qx) | qz.tb > qx.tb ∧ qz.te > qx.te}
The five concurrency classes correspond to the classes shown in Figure 3.2, and are meant
to be used by the application programmer to describe concurrent events. For timed statements,
the application programmer explicitly writes the timestamps. If the application programmer
wants to state that one event should last 5 seconds longer than the other, we denote this by
setting te = 5 seconds. However, by using the concurrency classes, the application programmer
does not have to specify explicit timestamps. For instance, if the application programmer wants to state that the medication should be taken while the monitored person stands close to the medicine cupboard, there is no need to specify the timing of the event. The system detects that
the person stands close to the cupboard by reading the relevant sensors. If the monitored person
starts taking medicine, CommonSens starts evaluating the complex query. If the application
programmer has specified that the deviations from this query are interesting, CommonSens
sends a notification if the monitored person brings his medicine from the medication cupboard
and takes them elsewhere. An example of a query that describes this complex event is shown in
the following section.
3.4.2 Syntax
The grammar of our query language is based on contributions made by existing complex event processing systems. For instance, we use the consecutiveness relation →, which is also used for detecting consecutive patterns in the Esper CEP library [esp].
The grammar is shown in extended BNF (EBNF) in Figure 3.7. We begin with a bottom-up
presentation of the grammar, however we do not include definitions of strings and numbers.
Timing involves using timestamps to state when and for how long events should occur. If only one timestamp is used, the query is δ-timed. If the query contains two timestamps, the query is timed
and the event should happen within a given interval. Finally, the P -registration is indicated by
max or min and a double that sets the percentage. The atomic events are described by atomic
queries, which are defined through Atomic. An atomic query always contains the capability,
one of the operators in Operator, and the expected value of the capability. In addition, the
atomic query can contain the LoI and temporal properties. The query language contains the
common logical operators, &&, ||, and !, and the consecutiveness relation ->. The symbols
in ConcOp match five concurrency classes based on Allen’s temporal classes [All83]. The
grammar for concurrency is defined in QConcurrent. It takes as arguments two chains of
queries.
A chain of queries, denoted Chain, contains Queries and possibly Timing. Queries
can contain one or more atomic or concurrent queries. The atomic queries are either related with
a relation or one or more logical operators. If the dev operator is applied to a chain it means
that the application programmer is interested in notifications only if there are deviations
from the chain of queries. The root of the parsing tree is the symbol QComplex which denotes
the complex query. QComplex can consist of one or more chains, which are connected with
relations.
An example is a complex query describing that the monitored person Adam is supposed to be located at the LoI medicine cupboard when taking the two medicines he is prescribed:
[dev(during([(TakingMedication == Med1) ->
             (TakingMedication == Med2)],
            [(DetectPerson == Adam, MedCupboard)]))]
QComplex        = '[' Chain ']' (Relation '[' Chain ']')*;
Chain           = Queries (',' Timing)? | 'dev(' Queries (',' Timing)? ')';
Queries         = (Atomic | QConcurrent) ((Relation | (LogicalOperator)+)
                  (Atomic | QConcurrent))*;
QConcurrent     = ConcOp '(' '[' Chain ']' ',' '[' Chain ']' ')';
ConcOp          = 'equals' | 'starts' | 'finishes' | 'during' | 'overlaps';
Relation        = '->';
LogicalOperator = '&&' | '||' | '!';
Atomic          = '(' Capability Operator Value ')' |
                  '(' Capability Operator Value ',' LoI ')' |
                  '(' Capability Operator Value ',' Timing ')' |
                  '(' Capability Operator Value ',' LoI ',' Timing ')';
Operator        = '==' | '!=' | '<' | '>' | '<=' | '>=';
Timing          = Timestamp | Timestamp ',' Timestamp |
                  Timestamp ',' ('max' | 'min') ' ' Double '%' |
                  Timestamp ',' Timestamp ',' ('max' | 'min') ' ' Double '%';
Figure 3.7: Our query language as written in EBNF.
When the monitored person starts taking Med1, the query evaluation starts, and both the
medicines should have been taken before Adam moves out of the LoI MedCupboard. If he
only takes Med1 and moves out of the LoI, CommonSens interprets this as a deviation since
dev is used in the query.
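As a further, purely illustrative example in the same syntax (the LoIs Bedroom and Kitchen and the capability Motion are our own assumptions, not taken from the evaluation), the following query asks for a notification if motion is not detected first in the bedroom and then in the kitchen between 07:00hr and 09:00hr:

[dev((Motion == True, Bedroom) ->
     (Motion == True, Kitchen), 07:00, 09:00)]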
3.5 Discussion and Conclusion
In this chapter we define the three fundamental models in CommonSens: the event model, the
environment model and the sensor model. The event model shows our definition and interpretation of events. Our event definition limits events to states and state transitions in the environment
that someone has declared interest in, i.e., events are concrete states and state transitions that
occur in time and space. This means that the events can have duration. The spatial property of
an event is defined by a LoI, which is a set of coordinates. The atomic event can not be divided
into smaller events. The complex event contains a set of consecutive and concurrent events. A
deviation is the non-occurrence of an event, i.e., it occurs when the event does not occur.
In automated home care, events can only be detected by sensors, and these sensors are
affected by the environment they are in. The environment is a general description of the home
where the monitored person lives and where the events occur. The environment consists of
objects with shapes and permeability values. The permeability values are included in order
to define how the sensor signals are affected. In order to simplify the work of the application
programmer, the objects can be reused. This means that when one object is defined it can be
reused in several environments.
We want sensors to support three main features. The sensors should read states from the environment, provide historical data, and be able to aggregate data from several sensors. This gives three different types of sensors: (1) the physical sensor, which reads analogue states from the environment and turns them into data tuples, (2) the external source, which returns stored data,
and (3) the logical sensor, which aggregates and processes data tuples from different sensors. In
addition, we want sensors only to provide capabilities, i.e., the state variables they can observe.
The capabilities a sensor provides and the sensors are loosely coupled. This means that one
sensor can provide several capabilities and one capability can be provided by several different
sensors. This allows the application programmer to write general queries that do not address
the sensors directly. Only during an instantiation phase, the sensors are connected with the
capabilities. This allows queries to be reused in many environments with many different types
of sensors.
The query language allows the application programmer to describe complex events. Instead
of addressing concrete instances, the queries only address capabilities and LoIs. In addition
they have temporal properties.
In the following chapter we show how the models are used. We show how the queries are
instantiated, i.e., how the general query is mapped to a given instance.
Chapter 4
Instantiation and Event Processing
In order to show the combination and interworking of the concepts in CommonSens, we have
created a simple work flow schema that shows the life cycle phases in CommonSens. This
chapter explains how CommonSens uses the data model concepts and the query language during
the three system life cycle phases (see Figure 4.1): (1) A priori, (2) event processing, and (3)
system shut down.
In the a priori phase, the environment is modelled with new objects, or objects are reused
from a repository. The sensor model is independent of a particular application instance and
describes the available sensors. Depending on the needs of the monitored person, the application
programmer writes queries or reuses queries from a repository. The system uses the capabilities
and LoIs to select the relevant sensor types during query instantiation. The results of this step
are instantiated queries that refer to particular sensors in the home instead of capabilities and
LoIs. Furthermore, the system supports the sensor placement process and calculates whether
sensors are appropriately placed to detect all events. When all the queries are instantiated, the
queries are turned into an event processing model that can be evaluated by CommonSens. The
event processing phase consists of evaluating data tuples from the sensors against the conditions
in the current atomic queries. The third phase in the life cycle is the system shut down, where
e.g. adjustments to the queries can be performed.
[Figure 4.1 depicts the life cycle: in the a priori phase (1), the environment model, the queries and the sensor model feed into sensor placement, query instantiation and event processing model creation; the event processing phase (2) consists of data gathering and evaluation; phase (3) is the system shut down.]
Figure 4.1: Life cycle phases and concepts of CommonSens.
Section 4.1 discusses how to place the sensors in order to detect spatial events, i.e., events
that occur in specific LoIs in the environment. In Section 4.2 we define the query instantiation.
Finally, the event processing model and event processing are discussed in Section 4.3.
4.1 Sensor Placement
In this section we present how CommonSens uses the environment model and the sensor model
to support sensor placement. Sensor placement is important for detection of spatial events, i.e.,
events that occur in the environment and inside the coverage area of the sensors. Sensors that
cover single objects, e.g. light switches, are not considered in this section. We first describe
how to use existing signal propagation models and describe an algorithm that calculates how
the signal is affected by the permeability values of objects. In addition we present how coverage
areas are modelled in CommonSens. Second, we discuss how to approximate the LoI by using
multiple sensors and how to calculate the probability of false positives.
4.1.1 Coverage Area Calculation
In order to calculate how a particular sensor installation is affected by the environment, we use
existing signal propagation models for various types of signals. Signal processing is a complex
issue which is out of scope for our work. On the other hand CommonSens is designed to accept
all types of signal propagation models. The only requirement is that the signal model returns the
signal strength when the initial signal strength, current permeability value, and distance from the
source are given as input. We assume that the signal strength decreases with the distance from
the source. When the signal strength drops below a certain threshold m, we regard the signal as too weak, i.e., we can no longer be sure that the signal is strong enough to guarantee a correct reading. However,
determining the reliability of the measurements made from the sensor is a signal processing
issue and out of the scope for our work. We assume that every state measured by the sensor is
correctly measured. Thus, the distance between the sensor and the coordinate where the signal
strength reaches m defines the real coverage range. We use the term range for the distance
between two coordinates. To exemplify how our system handles signal propagation we adapt a
simple model from Ding et al. [DLT+ 07].
Signal Model 1 The signal strength S at distance d from the sensor:
S(P0, β, d) = P0                  if d < d0
S(P0, β, d) = P0 / (d/d0)^β       otherwise
d0 denotes the physical size of the sensor, and can be deduced from the sensor model. P0
is the signal strength at distance 0. The exponent β is based on the permeability value of the
current object.
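In Java, Signal Model 1 could be rendered as the following small helper (our own sketch; d0 is passed explicitly although the definition treats it as a property of the sensor):

    // Signal strength S(P0, beta, d) at distance d from a sensor of physical size d0.
    final class SignalModel1 {
        static double strength(double p0, double beta, double d, double d0) {
            return d < d0 ? p0 : p0 / Math.pow(d / d0, beta);
        }
    }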
Since the permeability value is between -1 and 1, it sometimes has to be converted so that it matches the signal model. For instance, for Signal Model 1, the β value is
based on the permeability value. However, since the β value is an exponent it can not be directly
used as a permeability value. For instance, β = 0 only gives P0 and not 0. Hence, we need
Figure 4.2: Signals that are sent through the objects in two directions and which create intervals.
additional functions that map the signal model to our concept of permeability. Signal Model 1
also needs to be overridden with additional logic when the permeability value of an object is 0.
Note that Signal Model 1 does not handle reflection, i.e., when the permeability values are
lower than 0. In addition, multipath fading issues are regarded as future work.
It is important to identify all objects that could reduce the coverage area of the sensor. This
includes all objects that are located between the physical sensor and the maximal range of the
signal, i.e., where the signal strength reaches m. From the environment model all objects with
their locations and permeability values are known. With this information, it is possible to divide
the range of the signal into a number of intervals. Each interval corresponds to the objects
the signal may pass through. The length of the interval is the distance between the coordinate
where the signal enters the object and the coordinate where the signal leaves the object. Figure
4.2 shows the separate intervals and objects. The signals that are sent from a physical sensor
pass through an object in many directions. In the figure this is illustrated by showing two
signals that go in two different directions from a physical sensor. Signal a) is perpendicular to
the object and signal b) is not. Therefore, the distance through the object for signal a) is shorter
than the distance for signal b), since the direction of signal b) is not perpendicular to the object.
Hence, the two signals generate two separate intervals through the object. These two intervals
are denoted i2 and i5 and have different lengths.
To calculate the signal propagation through objects, we apply the concepts from ray-tracing
and model the signal as a set of rays. A ray denotes the coordinates the signal covers when it
passes through an object until its signal strength reaches m. A coordinate is a triple (x, y, z) in
the environment. Hence, the set of coordinates in a ray defines the range of the signal.
Definition 4.1.1 A ray is a straight line from coordinate (x, y, z)B to (x, y, z)E that consists of
an ordered set of intervals:
ray((x, y, z)B , (x, y, z)E ) = {val0 , . . . , valN −1 }
val = ((x, y, z)b , (x, y, z)e , p) where the Euclidean distance between (x, y, z)b and (x, y, z)e
denotes the length of the interval. p denotes the permeability value. We set val0 .(x, y, z)b =
(x, y, z)B .
The Euclidean distance between two coordinates (x, y, z)i and (x, y, z)j is defined as a function
DIST, which takes as arguments the two coordinates.
DIST((x, y, z)i, (x, y, z)j) = √((xi − xj)² + (yi − yj)² + (zi − zj)²)
Henceforth, when referring to the Euclidean distance between two coordinates (x, y, z)i and (x, y, z)j, we use the term distance or use DIST.
For each of the intervals a signal passes, the signal model is applied with the permeability
value of the object and the length of the interval. This makes it possible to have a simple
algorithm that calculates how much the signal strength is reduced. To begin with, P0 is set to
the initial signal strength and d equals the length of the interval. This gives the signal strength
when the signal leaves the sensor. For the next interval, P0 and d are set to the current signal
strength and interval length. This operation continues until the signal strength reaches a value
that is lower than m or that the signal reaches an object with permeability value 0.
The algorithm is defined in a function called REDUCE, which takes as arguments a signal type γex, the threshold value m, and the coordinate (x, y, z)φPex where the physical sensor φPex is
located. From the environment model we get all the objects between the sensor and the point
where the signal strength initially reaches m. From the sensor model it is possible to obtain
the current direction of the signal. The algorithm consists of a loop that adds new val tuples to
the ray and runs until P0 < m. The function returns a ray with the real coverage range. The
algorithm is presented in Figure 4.3. The algorithm uses many helping functions in order to
adapt to the current signal strength model.
Initially, REDUCE uses a function MAXSTRENGTH, which takes as argument the signal type and sets P0 to the maximum signal strength. For any coordinate in the environment the system uses the environment model to identify which objects are located at that point. To do this the system uses a function called OBJECT, which returns the current objects. This is done in Line 4.
In Lines 6 and 7 the coordinates where the signal enters and leaves the object are obtained from the environment model by using two functions called GETB and GETE. In order to obtain the permeability value of the object, a function GETPERMEABILITY returns the permeability value for the signal type in the object. This is done in Line 8. In Line 10 the β value is set by a function COMPUTEBETA, which maps the current permeability value to the β value that is used in the function. In the next line, P0 is calculated by checking the current signal model. As input the algorithm uses the previous P0. While the signal strength is higher than or equal to m, the new val tuples are added to the ray. This is done by a function called ASSIGN. The new P0 value is set by calling S(P0, β, d) in Line 11.
In Line 12 the algorithm checks if the signal strength has become lower than m. If so, the signal reaches m inside the object. In Line 13 the algorithm uses a function GETREMAININGDISTANCE, which returns the distance from the edge of the object to the coordinate where the signal strength reaches m. (x, y, z)e is set by calling a function COORD that uses the environment model to find this coordinate. Finally, REDUCE returns the ray. The real coverage range equals the length of the ray and is given by DIST(ray.val0.(x, y, z)b, ray.valN−1.(x, y, z)e), where |ray| = N.
Require: γex, m, (x, y, z)φPex
 1: P0 ← MAXSTRENGTH(γex)
 2: (x, y, z)t ← (x, y, z)s
 3: i ← 0
 4: objtmp ← OBJECT
 5: while P0 ≥ m do
 6:   vali.(x, y, z)b ← GETB(objtmp)
 7:   vali.(x, y, z)e ← GETE(objtmp)
 8:   vali.p ← GETPERMEABILITY(objtmp.p, γex)
 9:   d ← DIST(vali.(x, y, z)b, vali.(x, y, z)e)
10:   β ← COMPUTEBETA(vali.p)
11:   P0 ← S(P0, β, d)
12:   if P0 < m then
13:     d ← GETREMAININGDISTANCE
14:     vali.(x, y, z)e ← COORD(vali.(x, y, z)e, d)
15:   end if
16:   ASSIGN(ray((x, y, z)s), vali)
17:   i ← i + 1
18: end while
19: return ray((x, y, z)s)
Figure 4.3: The REDUCE algorithm.
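The following sketch (our own simplification, not code from the prototype) captures the core of the REDUCE loop: the ray is walked interval by interval, Signal Model 1 is applied with a β value derived from each interval's permeability, and the walk stops once the strength would drop below the threshold m. The Interval type and the computeBeta mapping are illustrative stand-ins for the helper functions used in Figure 4.3.

    import java.util.List;

    final class CoverageReduction {

        // One interval of the ray: its length and the permeability of the object it crosses.
        record Interval(double length, double permeability) {}

        // Illustrative mapping from a permeability value in [-1, 1] to the exponent beta;
        // the real mapping depends on the chosen signal propagation model.
        static double computeBeta(double permeability) {
            return 2.0 * (1.0 - permeability);
        }

        // Returns the covered range along the ray, i.e., the summed interval lengths
        // until the signal strength falls below the threshold m (Signal Model 1 inlined).
        static double coveredRange(List<Interval> intervals, double p0, double m, double d0) {
            double strength = p0;
            double range = 0.0;
            for (Interval interval : intervals) {
                double beta = computeBeta(interval.permeability());
                double next = interval.length() < d0
                        ? strength
                        : strength / Math.pow(interval.length() / d0, beta);
                if (next < m) {
                    // The signal dies somewhere inside this interval; the real algorithm
                    // (Lines 13-14) computes the exact remaining distance.
                    return range;
                }
                strength = next;
                range += interval.length();
            }
            return range;
        }
    }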
An example is illustrated in Figure 4.4. Every object except the last denotes an interval,
and in the figure there are five objects; Object A to Object E. Object A, Object C
and Object E represent the open space of three rooms. These three objects all have the same
permeability values, p1. The two remaining objects, Object B and Object D represent
walls between those rooms and have permeability values p2 and p3, respectively. These permeability values are lower than p1. The signal strength starts at 1 and decreases in Object
A. Since p2 is lower than p1 the signal strength decreases more in Object B. In Object
C, p1 reduces the decrease. Towards Object E, the objects and intervals correspond. The
interval i5 starts in Object E, but has a shorter length than the object. This is because the
signal strength reaches m.
The coverage area of the physical sensor is modelled as a set of rays. The rays go in those
directions the sensor is sending its signal to, or respectively receiving signals from.
Definition 4.1.2 The coverage area cov is a set of rays:
cov = { ray((x, y, z)0, (x, y, z)1), ray((x, y, z)1, (x, y, z)2), . . . , ray((x, y, z)N−2, (x, y, z)N−1), ray((x, y, z)0, (x, y, z)N−1) }
The rays that originate from (x, y, z)0 denote the signals and they sample the coverage area. The
rays that do not originate from (x, y, z)0 connect the ends of two neighbouring signal rays to
Figure 4.4: Coverage range divided into several intervals with different permeability values.
Figure 4.5: A model of a circle with a set of rays.
define the outer bounds of the coverage area. The result is an approximation of the real coverage
area and the higher the number of samples, i.e., rays, the more accurate is this approximation.
Note that Definition 4.1.2 allows coverage areas to be instantiated in both two and three
dimensions. Figure 4.5 shows an example of how a circle is modelled with a set of rays in a)
and where the outer bounds are defined in b). The circle it models is shown in c).
The real coverage area is found by applying the R EDUCE algorithm on the rays in cov that
are not the outer bounds. Figure 4.6 shows an illustration of how a reduced coverage area is
created. In a) the rays model a circle, in b) some of the rays are reduced by an object, and in c)
the final rays are added to define the outer range of the coverage area.
4.1.2 Sensor Placement
It is important that the LoIs are covered by sensors that provide the correct capabilities. Often
it is not possible to reduce the coverage areas of sensors so that they cover only the LoIs. If
there is a need for more accurate detection of whether an object is in the LoI, the readings from
several sensors need to be combined. This could be sensors that confirm that an object is in their
coverage area which overlaps with the LoI, or sensors that do not cover the LoI and confirm that
the object is not in their coverage area. A motion detector is an example of such a sensor. If
there is motion somewhere in the coverage area of the motion detector, the motion detector reports this.
Figure 4.6: The rays are affected by an object and the coverage area is reduced.
Figure 4.7: Using physical sensors to approximate LoIA.
Figure 4.8: Examples of relations between sensors that give equivalent results.
However, the motion detector can not report where in the coverage area the motion
is observed. This issue also applies to radio based sensors, for instance RFID readers. Figure
4.8 a) shows an example where an RFID reader with a coverage area and a passive RFID tag are
used. The RFID reader sends signals and responds to signals from RFID tags that are located
within its coverage area. However, the RFID reader can not determine the exact location of the
tag. From the RFID reader’s point of view, the three locations of the RFID tag are equivalent.
MICAz motes or similar physical sensors that use radio signals can create a similar issue. This is
illustrated in Figure 4.8 b). Even though the two sensors have different spatial relations, both are
within each other’s coverage area and can notice the presence of the other, but not the location.
During event detection this issue might cause the system to report false positives. In the worst
case, the monitored person can be located on the other side of the wall but mistakenly being
interpreted as inside the same room as the sensor. Therefore, it is important to approximate the
LoIs as precisely as possible.
In order to approximate the LoIs, we first combine the readings from physical sensors whose
coverage areas cover the LoI. This gives an intersection of the coverage areas, which covers the
LoI. Note that in the following definitions we use set-theoretical notation for the geometrical
concepts. The intersection is defined as:
Definition 4.1.3 For all φPi in the set of physical sensors, ISEC(loi) gives the intersection of all the coverage areas that cover loi:
ISEC(loi) = ∩ { φPi.cov | loi ⊆ φPi.cov }
However, there may be setups with physical sensors whose coverage areas intersect with
ISEC(loiy) but not with loiy. If these physical sensors provide the correct capabilities and do not report any readings while the ones that are part of ISEC(loiy) do, the probability that the
event occurs inside the LoI is even higher.
Definition 4.1.4 The union of all the coverage areas that cover ISEC(loi) but not loi:
NOISEC(loi) = ∪ { φPj.cov | (φPj.cov ∩ loi = ∅) ∧ (φPj.cov ∩ ISEC(loi) ≠ ∅) }
The approximation of loi is therefore found using the following equation:
LOIAPPROX(loi) = ISEC(loi) \ NOISEC(loi)    (4.1)
The equation uses the difference between the two sets to remove the part of the shape that ISEC and NOISEC have in common.
A false positive can happen when an event is recognised outside loiy but inside the area given by LOIAPPROX(loiy). In order to reduce the amount of false positives, it is important to
minimise the difference between the real LoI and the approximation.
The probability of a false positive can be found using the following equation:
FPPROB(loi) = 1 − AREA(loi) / AREA(LOIAPPROX(loi))    (4.2)
AREA is a function that calculates the size of a surface, i.e., the LoI and its approximation. A perfect match between the LoI and the approximation is when FPPROB(loi) = 0. Unfortunately, it is not possible to assume that there are enough sensors to perfectly match all LoIs.
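For instance, if a LoI of 2 m² can only be approximated by an area of 2.5 m², then FPPROB(loi) = 1 − 2/2.5 = 0.2, i.e., a detected event has a 20% probability of actually lying outside the real LoI (the numbers are purely illustrative).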
An issue with the approximation and physical sensors is that in practice the sensor signals
sometimes may interfere. It is important that the system provides a synchronisation scheme that
controls when the physical sensors actually sample their coverage areas so that sensors whose
coverage areas intersect are not pulled concurrently.
During the life cycle phases of CommonSens, the needs of the monitored person might
change. This means that there will be new LoIs and a possible need for new capabilities. In
these situations it is important to know whether it is possible to utilise the existing setup of
physical sensors. The system investigates the current sensors, the capabilities they provide,
and their coverage areas. If sensors exist that provide the right capabilities and cover the new
LoI, the system informs about this. In addition, the system calculates the probability of false
positives.
If the current setup does not cover the new LoI and the capabilities, there is need for a new
configuration of the setup. This is done either by adding new physical sensors, or by changing
the location of the physical sensors that are already in the environment. In the latter case, this
affects the probability of false positives in the existing LoIs. In these situations the probability
of false positives has to be recalculated for all LoIs that are affected by changing the placement
of the physical sensors.
The prototype implementation of CommonSens provides a graphical user interface that supports manual placement of the physical sensors so that they cover the LoIs. FPPROB is called and computes the probability of false positives given the current approximation. Covering the
LoIs is an optimisation problem, where the cost associated with each sensor and its placement
has to be taken into consideration. However, automatically optimising the sensor placement is
considered future work.
For an atomic query, all the data tuples that arrive from the sensors in ISEC and NOISEC are relevant. Hence, in an instantiated atomic query all the sensors in the two sets have to be included. Each of the atomic queries in a complex query is instantiated, and the ISEC and NOISEC
sets contain the sensors that approximate the LoI and that provide the correct capabilities. The
query instantiation is discussed in the following section.
4.2 Query Instantiation
The main purpose of the instantiation process is to select correct sensors based on the LoI
and capability in an atomic query. In other words, the instantiation process personalises the
complex queries by adapting to the environment, sensors and monitored person. We focus on
instantiation of atomic queries since combining instantiated atomic queries for complex queries
is straight forward.
The sensor model describes the capabilities for each sensor. Thus, a look-up in the set of
sensors for a given home results in those sensor types that provide the specified capability. If the
sensors are not already in the home, they can be obtained from a repository. In case a capability
is provided only by physical sensors or external sources these sensors can be used directly.
Logical sensors depend on input from other sensors and the required capabilities of the input
sensors are part of the logical sensor description. These capabilities can be used to identify
the corresponding sensor types. This process needs to be iterated until all necessary sensors are
identified. In this tree the logical sensor is the root and the physical sensors and external sources
are leaves. Intermediate nodes in this tree are also logical sensors. This tree describes the
hierarchical dependency between all sensors that are needed to provide the requested capability.
Simple examples of such trees are shown in Figure 3.5.
In general, such a tree can be of arbitrary size. The procedure is defined in Figure 4.9 and uses a recursive function called FINDSENSOR. The function takes as argument a set C of capabilities. The first time the function is called, the capability is the one that is part of an atomic query. For each capability in C, the function obtains one of the sensors that provide this capability. This is done by a function called CHOOSESENSOR. We do not go into the details about how this function works, but it uses availability and cost metrics to decide which sensor to return. In situations where the system needs to be extended with more queries, CHOOSESENSOR investigates the current setup and returns sensors that are already placed in
the environment and approximate the LoI. For instance, it is appropriate to reuse a sensor in the
environment if it already provides the capability and is placed correctly with respect to the LoI.
In other cases it investigates cost and availability metrics. If several alternative sensors exist
that provide a given capability, a possible choice would be to choose the one that costs less.
However, more expensive sensors might provide even more capabilities, something which may facilitate reuse.
Require: C
1: for all ci ∈ C do
2:   φ ← CHOOSESENSOR(ci)
3:   if φ = φL then
4:     sensors ← sensors ∪ FINDSENSOR(φ.Cd)
5:   else
6:     sensors ← sensors ∪ φ
7:   end if
8: end for
9: return sensors
Figure 4.9: The FINDSENSOR algorithm.
In addition, the current queries have to be taken into consideration when trying to choose sensors more optimally. The sensor returned by CHOOSESENSOR is mapped to the general sensor φ. If the sensor is a logical sensor, the search continues. This is done in Line 4, where the capabilities the logical sensor depends on are used as arguments for a new call to FINDSENSOR. The function returns the set sensors, which contains all the physical sensors
and external sources for the capability.
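A compact Java rendering of this recursion (our own sketch; Sensor, LogicalSensor and SensorCatalogue are stand-ins for the corresponding prototype concepts, and chooseSensor hides the availability and cost metrics) could look like this:

    import java.util.HashSet;
    import java.util.Set;

    final class SensorSelection {

        interface Sensor {}

        interface LogicalSensor extends Sensor {
            Set<String> dependsOn(); // Cd
        }

        interface SensorCatalogue {
            Sensor chooseSensor(String capability);
        }

        // FINDSENSOR: resolve a set of capabilities to physical sensors and external sources.
        static Set<Sensor> findSensors(Set<String> capabilities, SensorCatalogue catalogue) {
            Set<Sensor> sensors = new HashSet<>();
            for (String capability : capabilities) {
                Sensor sensor = catalogue.chooseSensor(capability);
                if (sensor instanceof LogicalSensor logical) {
                    // Recurse into the capabilities the logical sensor depends on (Line 4).
                    sensors.addAll(findSensors(logical.dependsOn(), catalogue));
                } else {
                    sensors.add(sensor);
                }
            }
            return sensors;
        }
    }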
In addition to letting the system choose physical sensors at system setup, the intention of the
vocabulary tree is to reduce the number of new sensors required when the monitored person’s
needs change. As stated in the previous section, if the system is extended with a query that
involves a new LoI and capability, there is a possibility that the current setup of sensors already
provides the capabilities that are needed at the new LoI. Based on the vocabulary tree, it is
possible to discover new usages of the current setup.
The instance of the hierarchy is defined globally. New sensors are added to the instance
with the capabilities they provide and possibly depend on. When the LoIs and the capabilities
are chosen for a certain home, the system consults with the instance of the global hierarchy to
find the correct sensors. In cases where a capability is provided by several alternative sensors,
issues like cost and availability can be taken into consideration as well.
As indicated when using FINDSENSOR, there exist ways of optimising the search and choosing the sensors that are most appropriate for the current environment. However, this is considered future work.
For an atomic query all the data tuples that arrive from the sensors in ISEC and NOISEC
are relevant. Hence, in an instantiated atomic query all the sensors in the two sets have to be
included.
Definition 4.2.1 An instantiated atomic query of an atomic query qA is a tuple:
IqA = (qA.cond, ISEC(qA.loi), NOISEC(qA.loi), qA.tb, qA.te, preg)
All the atomic queries in a complex query are instantiated. During the event processing
phase the sensors in ISEC should produce data tuples that match the condition, and the sensors in NOISEC should produce data tuples that do not match the condition.
4.3 Event Processing
As shown in Figure 4.1, the last part of the a priori phase is the event processing model creation.
The a priori phase is followed by the event processing phase, and in this section we discuss the
concepts related to the event processing model creation and event processing.
There are many alternative ways of designing an event processing model from stream based
event processing systems. The event processing model should consist of an element that receives data tuples from sensors and evaluates these data tuples against one or more complex
queries that are evaluated concurrently. In order to do so, the event processing model should
use a query evaluator component. Since CommonSens should support many queries that are
evaluated concurrently, we need a query pool, i.e., a data structure that keeps track of all the
running queries. In our work we do not discuss optimisation of concurrent queries, like operator scheduling [CcR+ 03] or eddies [AH00]. Not all the sensors are interesting all the time.
If a sensor is not part of the ISEC or NOISEC sets of an instantiated query, it is assumed that the sensor does not provide any important data tuples. CommonSens needs a component that only pulls data tuples from the sensors that are in the ISEC and NOISEC sets of the instantiated
queries. This component is called the data tuple selector.
The query evaluator, query pool and data tuple selector components are discussed in the
following sections.
4.3.1 Query Evaluator
The query evaluator is the fundamental component in all query processing systems, for instance
DBMSs [GMUW08], DSMSs [CCD+ 03], and CEPs [Luc01]. In general, a query evaluator
receives a set of data tuples and a query, and returns whether the condition in the query was
matched or not. This basic functionality is adapted by our query evaluator. However, there are
two differences between the query processing in CommonSens and the other query processing
systems.
The first difference is that next to containing one condition, an atomic query has temporal
properties that have to be evaluated. The data tuple has to contain timestamps that explicitly
tell when the state value has been read. The timestamps of the data tuple have to be within the
duration specified in the atomic query. In addition, the atomic queries can be timed, δ-timed
and P -registered. This means that the query evaluator has to keep in memory the progress of
the evaluation of the atomic queries. Since the complex queries also can be timed, δ-timed,
and P -registered, the query evaluator needs to keep this information in memory as well. This
means that the query evaluator has to support stateful evaluation of the queries, e.g. counters
that investigate the P -registration. For instance, a query that corresponds to the P -registration
example in Section 3.4 is stated like this: [(Temperature > 19C, kitchen, 08:00,
23:00, min 80%)]. Between 08:00hr and 23:00hr, the temperature in the LoI kitchen
should be higher than 19◦ C. This should apply to 80% of the data tuples. Only when the value
of the counter matches 80% and the current time has not passed 23:00hr, the query evaluator
confirms that the atomic query is processed according to the conditions.
The second difference is related to how the query processor should respond to the result.
Since CommonSens should respond with e.g. alarms if the monitored person does not wake up
in the morning or if the monitored person falls and does not get up, the query evaluator should
start a user-defined action that depends on the query processing results. For instance, if there
is a deviation and the application programmer has explicitly asked CommonSens to be notified
about deviations, the query evaluator sends an alarm to the helping personnel or a notification to
the monitored person. This functionality corresponds to the event-condition-action (ECA) concept in other event driven architectures and active database systems [QZWL07]. The definition
of the notifications is left for future work.
4.3.2 Query Pool
CommonSens should support processing of several queries in parallel. This is a convenient
feature since there are many different events that can be important to know about in the home.
For instance, there can be queries that constantly check the temperature in the home, while
other queries investigate the well being of the monitored person. In addition, there can be
several monitored persons in the same home, even though we do not focus on this. In order to
solve this, the event processing model needs to support a pool of the current queries. It should
also be possible to remove and add queries to the query pool. Note that adding new queries
has to be performed during the system shut down phase, since this operation might require new
sensors, sensor placement, and query instantiation.
The query pool consists of the complex queries that are processed in the current environment. The instantiated complex queries are directly mapped to boxes and arrows. We have chosen to represent the instantiated queries as boxes and arrows since this is a simple and intuitive
paradigm. Currently, the atomic queries that are connected with logical operators are placed
in one box. For instance, the following atomic queries would be placed together in one box:
[(Temperature > 19C, kitchen, 08:00, 23:00) && (Light == ON, kitchen,
08:00, 23:00) || (DetectPerson == Adam, livingroom), 08:00, 23:00, 10%].
A user-defined action is started if the temperature in the kitchen is above 19◦ C and the light
is either on in the kitchen or the monitored person is in the living room. In the query pool the
arrows in the boxes and arrows paradigm denote the followed-by relation. When the conditions
and temporal requirements in a box are matched, the consecutive box is evaluated. If the complex query uses one of the concurrency classes, there will be two chains of consecutive boxes.
These two chains are evaluated concurrently by the query evaluator.
4.3.3 Data Tuple Selector
There may be many sensors in the environment, and we assume that not all these sensors can
measure relevant states all the time. Only data tuples from the sensors in the ISEC and NOISEC
sets of the current atomic queries are relevant. Therefore, CommonSens needs a component that
can pull only the relevant sensors. This is the third component in the event processing model
and is called the data tuple selector.
The data tuple selector obtains from the query pool the current atomic queries. First, the data
tuple selector investigates the ISEC and NOISEC sets of each of the running queries. Second,
it pulls the sensors and receives the data tuples. Third, the data tuples are sent to the query
evaluator. Sometimes there are atomic queries that require data tuples from the same sensors.
[Figure 4.10 sketches the event processing components: the data tuple selector pulls the sensors in the environment and forwards data tuples to the query evaluator, which evaluates the instantiated complex query IqC1 (consisting of IqA1 to IqA6), notifies the helping personnel, and receives queries from the application programmer.]
Figure 4.10: Overview of the query processor in CommonSens.
For instance, one complex query investigates the temperature value in the kitchen, while another
query waits for the temperature in the kitchen to reach 15◦ C. The data tuple selector only pulls
the temperature sensor once and sends the data tuple to the query evaluator.
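A sketch of this pulling cycle (our own illustration with stand-in interfaces; DataTuple is the sketch from Section 3.3) collects the sensors of all running queries into one set so that each relevant sensor is pulled only once:

    import java.util.LinkedHashSet;
    import java.util.Set;

    final class DataTupleSelectorSketch {

        interface PullableSensor {
            DataTuple pull();
        }

        interface InstantiatedQuery {
            Set<PullableSensor> isecSensors();
            Set<PullableSensor> noIsecSensors();
        }

        interface QueryEvaluator {
            void evaluate(DataTuple tuple);
        }

        static void pullCycle(Iterable<InstantiatedQuery> queryPool, QueryEvaluator evaluator) {
            Set<PullableSensor> relevant = new LinkedHashSet<>();
            for (InstantiatedQuery query : queryPool) {
                relevant.addAll(query.isecSensors());
                relevant.addAll(query.noIsecSensors());
            }
            for (PullableSensor sensor : relevant) {
                evaluator.evaluate(sensor.pull()); // each relevant sensor is pulled once per cycle
            }
        }
    }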
A simple example of the event processing model is shown in Figure 4.10. To show an
example of the components of the event processing model, the figure shows a snapshot of the
evaluation of an instantiated complex query IqC1 , which consists of the instantiated atomic
queries IqA1 to IqA6 . We only address the query names: IqC1 = Dev(IqA1 → during(IqA2 →
IqA4, IqA3 → IqA5) → IqA6). The application programmer wants CommonSens only to report
if there are any deviations from the complex query. Note that the illustration of the query is
simplified. A more detailed illustration is shown in Figure 5.4 in Section 5.1.3.
First IqA1 is evaluated. Then, two subsets of the complex query are evaluated concurrently:
IqA2 is followed by IqA4 . During the processing of these two queries, the atomic query IqA3
should be followed by IqA5 . After the concurrent queries are processed successfully, IqA6 is
processed. The snapshot shows the processing of IqA2 and IqA3 after IqA1 is successfully
evaluated.
4.4 Discussion and Conclusion
This chapter explains how CommonSens uses the data model to create an event processing
model that consists of instantiated complex queries that can be evaluated by a query processor.
CommonSens uses signal propagation models to help the application programmer to place the
sensors properly. This is done because the objects in the environment reduce the coverage area
of physical sensors. The amount of reduction depends on the permeability values in the objects.
An issue with using signal propagation models is that even small changes in the environment
affect the signal considerably. This issue especially applies to radio signals. Many state transitions, e.g. turning on the water, change the coverage area of sensors that use radio signals. In
addition, when the location of the objects in the environment changes, this affects the coverage
area. For instance, if the monitored person moves furniture or opens a door, the approximation
of the LoIs might change. It is important that the application programmer is aware of these possible pitfalls, and that signal propagation models and approximation of LoIs do not guarantee
correct reading. However, using signal propagation models and approximation of LoIs can help
the application programmer to be aware of possible sources of error and to avoid unforeseen
state values. This especially applies to signals that go through walls.
When the complex queries are correctly instantiated, they are mapped to a query pool in
the event processor. In this chapter we have only presented the concepts of the event processor through the event processing model. In the following chapter, we explain how the event
processing and other phases of the system life cycle are implemented.
Chapter 5
Implementation
This chapter describes the implementation of CommonSens. Through the implementation we
can realise the system life cycle phases and evaluate our three claims. We have chosen to implement CommonSens in Java since it facilitates fast development and is a common programming
language that is well understood by many developers. This simplifies further shared development of the code base. In addition, the data model concepts and query language are intuitively
mapped to the concepts that an object oriented programming language uses. For instance, the
three sensor types in the sensor model are implemented as subclasses of an abstract class called
Sensor. The implementation is a proof-of-concept prototype. This means that there are still
many features that are not implemented. Some of these features are addressed during the presentation of the implementation.
We have chosen to separate the implementation into six Java packages with dedicated responsibilities. The implementation is inspired by the three-tiered model-view-controller (MVC)
for separation of concerns [Ree79]. However, there are some overlaps between the view and the
controller classes, and we have not used any MVC frameworks during the implementation.
Following is a list of all the packages:
• environment. Holds the classes that implement the properties from the environment
model.
• sensing. Holds the classes that relate to the sensor model.
• language. This package contains the classes that support the event processing language.
• modelViewController. This package contains the classes that are responsible for the
user interface, storing the data, and running the system.
• eventProcessor. The classes are responsible for the event processing phase.
The structure of this chapter follows the three system life cycle phases. Section 5.1 focuses on how the models are implemented as classes and how these classes are structured. The
functionality and the remaining packages, e.g., how the event processing is implemented, are
presented in Section 5.2.
Figure 5.1: Key classes in the environment package.
5.1 Overview
This section discusses and gives an overview of the implementation of the environment model,
the sensor model and the query language. We exclude the event model from the discussion since
states in the environment can only be observed by the sensors that are placed there. The states
become sensor readings, which are turned into data tuples and become events if and only if they
match a condition in an atomic query.
We have chosen to use UML to show the class hierarchy and the associations between the
classes that make up the implementation. To keep the diagrams simple, we have not included
all the attributes and methods. If two or more associations point to the same class type, we use
one association and give it several names, which are separated by commas.
5.1.1 Environment Model
The environment package includes all the classes that are required in order to instantiate an
environment. The classes in the package are shown in Figure 5.1. In order to correspond to the
models, the key class in the package is Environment. The environment consists of objects. In
addition, the environment contains LoIs. In order to implement this, the Environment class has
associations to objects of both the CommonSensObject and the LocationOfInterest classes. The
environment also contains sensors, and we have implemented this as an association sensors that points to the sensors. The sensors are part of the sensing package and are discussed in Section 5.1.2.
The objects, as defined in Definition 3.2.3, are implemented in the class CommonSensObject. We have chosen this name in order to avoid confusion with objects in object oriented
programming. A CommonSensObject object has associations that point to one or more
objects of the classes Shape and Permeability. Permeability contains the permeability value and
an association to an object of the SignalType class. As with the sensors, SignalType is also in
the sensing package. LocationOfInterest is a simple class that only has one association to the
Shape class.
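To make the structure of the environment package more concrete, the following sketch outlines how these associations could look in Java. The class and field names follow Figure 5.1, but the exact types and signatures are simplifying assumptions and not the actual CommonSens code.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the environment package, assuming simplified field types.
class Triple {
    int x, y, z;                                        // a spatial coordinate
}

class Shape {
    List<Triple> boundary = new ArrayList<>();          // shape as specified
    List<Triple> boundaryReduced = new ArrayList<>();   // coverage area after reduction
}

class Permeability {
    double permVal;                                     // permeability value for one signal type
    String signalType;                                  // refers to sensing.SignalType in the real code
}

class LocationOfInterest {
    Shape shape;                                        // a LoI only has a shape
}

class CommonSensObject {
    List<Shape> shapes = new ArrayList<>();
    List<Permeability> permeabilities = new ArrayList<>();
}

class Sensor { }                                        // stand-in for sensing.Sensor (Section 5.1.2)

class Environment {
    List<CommonSensObject> objects = new ArrayList<>(); // objects in the home
    List<LocationOfInterest> lois = new ArrayList<>();  // locations of interest
    List<Sensor> sensors = new ArrayList<>();           // sensors placed in the environment
}
```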
Figure 5.2: Key classes in the sensing package.
LoIs, the environment and the physical sensors all have shapes. Hence, these classes have
associations to the Shape class. In addition, although not explicitly mentioned in Chapter 3,
the coverage area of a physical sensor can be interpreted as a shape as well. According to
Definition 3.2.1, a shape is only a set of coordinates. However, when the shape is a coverage
area, it might be reduced by the objects in the environment. Therefore, we have chosen to
put all the logic that is related to shapes and calculation of shapes in the Shape class. This
includes the methods that calculate the reduced coverage area of physical sensors. This is
reflected through the two associations boundary and boundaryReduced, which point to a
set of Triple objects. The Triple class contains the x, y and z coordinates that define a spatial
coordinate. In addition, the implementation uses the GPC (General Polygon Clipper) library
[gpc] and presents the shapes as polygons. This additional library supports set operations on
polygons and simplifies the operations related to approximation of LoIs. The approximation
of LoIs is further discussed in Section 5.2.2. In addition, the objects made from the classes
in the GPC library can be easily mapped to java.awt.Polygon, which simplifies the process of
representing the environment graphically through the GUI. In order to keep the UML class
diagram in Figure 5.1 as simple as possible, the figure does not include any polygon references.
According to Definition 4.1.1, a ray consists of a set of coordinate-strength tuples. These
tuples are implemented through the class CoordinateStrength. However, in our implementation,
Ray objects are only used when calculating the real coverage ranges of the sensor coverage area.
When the real coverage ranges are calculated, they are mapped to the boundaryReduced set.
5.1.2 Sensor Model
The sensing package contains the classes that make up the implementation of the sensor model.
The key classes and their relations are shown in Figure 5.2. The most important class in the
sensing package is the Sensor class, which is a superclass for PhysicalSensor, LogicalSensor,
and ExternalSource. During the event processing, CommonSens always relates to the objects
of the three sensor types. The interaction with real sensors is always implemented behind the interface that one of these three classes provides.
The Sensor class will never be instantiated, hence the class is defined as abstract. Since all
the three sensor types provide a set of capabilities, the Sensor class points to one or more objects of the class Capability through the association providedCapabilities. Even though
capabilities play an important role in the instantiation of CommonSens queries, we have not unambiguously defined their properties. This is reflected through the implementation. Therefore,
the Capability class is associated with the CapabilityDescription interface. The current implementation only uses a simple textual description combined with a value range. For example,
CapabilityDescription can be instantiated by the class SensorDetected, which returns true or
false. SensorDetected can for instance be provided by a motion detector that reports true if there is motion in the coverage area and false otherwise.
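The relation between the abstract Sensor class, its three subclasses and the capabilities can be sketched as follows. The names follow Figure 5.2, but the method signatures and the placeholder readings are assumptions made only for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the sensor model; the real classes have more attributes and logic.
interface CapabilityDescription {
    boolean matches(Object value);               // does a reading satisfy this description?
}

// Example description: a motion detector reporting true or false.
class SensorDetected implements CapabilityDescription {
    public boolean matches(Object value) {
        return value instanceof Boolean;         // any boolean reading is a valid value
    }
}

class Capability {
    String name;
    CapabilityDescription description;
    Capability(String name, CapabilityDescription description) {
        this.name = name;
        this.description = description;
    }
}

abstract class Sensor {
    List<Capability> providedCapabilities = new ArrayList<>();
    abstract Object pullThisSensor();            // obtain one reading (see Section 5.2.4)
}

class PhysicalSensor extends Sensor {
    double samplingFrequency;
    String signalType;                           // SignalType is a String in the current implementation
    Object pullThisSensor() { return Boolean.TRUE; }       // placeholder reading
}

class ExternalSource extends Sensor {
    Object pullThisSensor() { return "external value"; }   // placeholder reading
}

class LogicalSensor extends Sensor {
    List<Capability> dependedCapabilities = new ArrayList<>();
    Object pullThisSensor() { return null; }     // would aggregate via a CustomFunction
}
```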
As noted in the previous section, the PhysicalSensor class uses the Shape class in order
to model the cov property if it is defined as a coverage area. In case the cov property of the
physical sensor is an object, the PhysicalSensor class is associated with a CommonSensObject
class. An alternative solution would have been to provide a class Cov, which has associations
with the two classes. The current design clearly tells a developer that the coverage is either
Shape or CommonSensObject. Currently, much of the logic related to cov is implemented in the
PhysicalSensor class and the Shape class. The PhysicalSensor class contains an association to
the SignalType class. The SignalType class is currently implemented as a String that describes
the signal type. The samplingFrequency attribute is implemented as a double value in the
PhysicalSensor class.
According to Definition 3.3.4, the external source is only a set of capabilities. This is reflected through the ExternalSource class, which inherits the Capability association from the
Sensor class.
The logical sensor depends on a set of capabilities. The dependency is reflected through the
UML class diagram. The class LogicalSensor has an association dependedCapabilities
with one or more instances of the Capability class. According to Definition 3.3.5, the logical sensor uses a custom function in order to perform the aggregations. This is implemented
through an association customFunction that points to the CustomFunction interface. CustomFunction is an interface for all the custom functions that the logical sensors can use. The custom functions should be simple to use and, when needed, simple to change. The most appropriate way of implementing the custom functions is to implement them as plug-ins. In Java, plug-in
functionality can be provided through dynamic class loading. The only requirement is to have
a compiled version of the custom function implementation. This .class file is referred to
as part of the configuration of the logical sensor. When the logical sensor is instantiated, the
.class file is dynamically loaded and is ready to be used. The dynamic loading is done during
the a priori phase.
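As a concrete illustration of this plug-in mechanism, the following sketch loads a compiled custom function by class name during the a priori phase. The interface name CustomFunction comes from Figure 5.2; the method name apply and the example class name are assumptions and do not correspond to the actual CommonSens code.

```java
import java.util.List;

// Assumed plug-in interface; the real CustomFunction interface may differ.
interface CustomFunction {
    Object apply(List<Object> dependedValues);   // aggregate the depended capability values
}

class CustomFunctionLoader {
    // Loads a compiled CustomFunction implementation whose class name is given in the
    // logical sensor configuration, e.g. the hypothetical "plugins.AverageTemperature".
    static CustomFunction load(String className) throws Exception {
        Class<?> clazz = Class.forName(className);               // dynamic class loading
        return (CustomFunction) clazz.getDeclaredConstructor().newInstance();
    }
}
```

If the referenced .class file lies outside the classpath, a java.net.URLClassLoader pointed at its directory would be used instead of Class.forName.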
In retrospect, we see that dynamic class loading could have been used more widely. For instance, capabilities and capability descriptions might change over time, and with dynamic class loading, such changes would have been much easier to handle. In addition, the implementing class could have been referred to in the sensor configuration. Currently, the classes that implement the Capability interface have to be explicitly instantiated in the code. This solution is inconvenient and makes CommonSens more static than necessary. An open problem is to redesign the code so that dynamic class loading is used more extensively.
Figure 5.3: Key classes in the language package.
5.1.3 Query Language
The language package contains the classes that are processed by the event processor. The structure is very simple, and every class in the language is a subclass of the abstract QueryElement
class. The classes and their associations are shown in Figure 5.3. Given this structure, a complex query is a consecutive list of subclasses of QueryElement objects. This potential list-based
structure is shown through the self-association nextQueryElement of the QueryElement
class. Due to polymorphism, the event processor only has to point to QueryElement objects.
Only during event processing does the event processor identify the subclass type and process it accordingly. A possible solution would have been to use method overloading so that the event processor does not have to investigate the subclass at all. However, each of the subclasses is very different, and we need to use the instanceof comparison operator to identify the correct subclass and use the specific methods in that class. The consecutive objects of QueryElement form a list, and this list corresponds to the nonterminal Queries in the syntax of the
query language. In this section we first discuss the AtomicQuery class. Second, we discuss the
TimedListOfAtomicQueries and ConcurrencyOperator classes. Third, we present the operator
classes and the relation class and show an example of an instance of a complex query.
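The instanceof dispatch described above can be illustrated with the following sketch. The traversal and the handler methods are hypothetical; only the list structure and the use of instanceof are taken from the text.

```java
// Sketch of how the event processor could walk the QueryElement list and
// dispatch on the concrete subclass with instanceof.
abstract class QueryElement {
    QueryElement nextQueryElement;               // self-association forming the list
}
class AtomicQuery extends QueryElement { }
class ConsecutiveRelation extends QueryElement { }
class ConcurrencyOperator extends QueryElement { }
class OrOperator extends QueryElement { }

class QueryWalker {
    void walk(QueryElement first) {
        for (QueryElement e = first; e != null; e = e.nextQueryElement) {
            if (e instanceof AtomicQuery) {
                handleAtomicQuery((AtomicQuery) e);
            } else if (e instanceof ConsecutiveRelation) {
                handleRelation((ConsecutiveRelation) e);
            } else if (e instanceof ConcurrencyOperator) {
                handleConcurrency((ConcurrencyOperator) e);
            } // further subclasses (operators, chains) are handled the same way
        }
    }
    void handleAtomicQuery(AtomicQuery q) { /* evaluate condition */ }
    void handleRelation(ConsecutiveRelation r) { /* create a transition */ }
    void handleConcurrency(ConcurrencyOperator c) { /* process the two chains */ }
}
```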
The AtomicQuery class implements the atomic query as defined in Definition 3.4.1 and
in the syntax. It has an association loi to a LocationOfInterest class and an association
capability that points to the Capability class. Capability is used as part of the condition together with the Object class, which is pointed to via the association value. We use the general
Object class, since the value in a condition triple depends on the capability. The capability is
very general and it can range from single values like integers to XML specifications like Haar
classifiers for face recognition [WF06]. Therefore, the Java Object class is general enough to
model all the possible formats a capability value can have. The operator in the condition is a
String that can have one of the following values: ==, !=, <, >, <= or >=. We have not included
the range of possible values in the diagram. With respect to the condition triple, an alternative
design would have been to have a class called Condition which includes all the logic related to
the condition. Currently, this logic is located in the AtomicQuery class. The variables that are
used in the P -registration are located in the AtomicQuery class. The P -registration is explicitly
defined by the boolean variable isMin and the double variable percentage. If isMin is
true it means the P -registration of the current atomic query is min. Otherwise it is max.
An instantiated atomic query contains references to the sensors that approximate the LoI
through the I SEC and N O I SEC sets. The class AtomicQuery implements the approximation
through the associations isec and noIsec, which point to two sets of sensors. In addition,
the variable fPProb corresponds to the probability for false positives, as defined by FPP ROB.
Finally, the associations begin and end point to two Timestamp objects. The Timestamp
class belongs to the eventProcessor package, and holds a long value that denotes timing in
CommonSens. The eventProcessor package is described in Section 5.2.3.
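To illustrate how a condition triple with a String operator can be evaluated, consider the following sketch. The method name and the restriction to numeric values are assumptions; the real AtomicQuery also handles capability-specific value formats such as the Haar classifiers mentioned above.

```java
// Sketch: evaluating a condition (capability, operator, value) against a numeric reading.
class Condition {
    String operator;     // one of ==, !=, <, >, <= or >=
    double value;        // the value in the condition triple

    boolean matches(double reading) {
        switch (operator) {
            case "==": return reading == value;
            case "!=": return reading != value;
            case "<":  return reading < value;
            case ">":  return reading > value;
            case "<=": return reading <= value;
            case ">=": return reading >= value;
            default:   throw new IllegalArgumentException("unknown operator " + operator);
        }
    }
}
```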
The class TimedListOfAtomicQueries is a QueryElement subclass and also contains an
association theList that points to a list of QueryElement objects. It corresponds to the nonterminal Chain in the syntax of the query language. The purpose of the TimedListOfAtomicQueries class is to manage a list of queries during the event processing phase and manage the
temporal properties of the list, i.e., timing, δ-timing and P -registration. As with the AtomicQuery class, the class has two associations begin and end that point to Timestamp objects.
ConcurrencyOperator is a class that handles the concurrency and corresponds to the nonterminal ConcOp in the syntax. ConcOp takes two lists as arguments, and this is reflected in
the UML class diagram through the two associations firstChain and secondChain that
point to TimedListOfAtomicQueries. In addition, the ConcurrencyOperator class inherits the
association to the next QueryElement object.
The three operator classes OrOperator, AndOperator and NotOperator do not contain any logic, since they are only used to identify the logical relation between two atomic queries.
In order to show how the classes in the language package relate when they are parsed, we
use the example query IqC1 from Section 4.3.3 and Figure 4.10. Note that the figure does not
show the instantiation process where the I SEC and N O I SEC sets are used. It only shows the
parsed complex query. The parsed version of IqC1 is shown in Figure 5.4. The complex query
is managed by an object of TimedListOfAtomicQueries, which points to the AtomicQuery object IqA1 . IqA1 points to the ConsecutiveRelation object r1, which points to the ConcurrencyOperator object during. As explained above, the ConcurrencyOperator class refers to two
TimedListOfAtomicQueries objects. Each of these two objects refer to the atomic queries that
describe concurrent events. Finally, during points to the ConsecutiveRelation object r4, which
points to the atomic query IqA6 .
The alternative solution would have been to link the objects so that their configuration is
similar to the one in Figure 4.10. However, we need an object that monitors the processing of
the concurrent events and verifies that the concurrency is correct. This is most convenient when
the two concurrent TimedListOfAtomicQueries objects are handled by the ConsecutiveRelation object. Hence, the event processing continues if the concurrent complex events occur as specified.
Figure 5.4: The parsed version of IqC1 .
The three packages environment, sensing and language contain classes that implement the
data model in CommonSens. In the following section, we show how these classes are used
during the a priori phase.
5.2 Functionality
This section discusses how the sensor placement, query instantiation and query processing
model creation are implemented, i.e., it discusses the implementation of the stages in the system life cycle phases (see Figure 4.1). Finally, the event processing phase is discussed. The
modelViewController package combines many of the features that CommonSens provides.
Therefore, it is important to present the classes that this package consists of. Note that this
section does not discuss all the algorithms and functionalities in CommonSens. It only discusses those algorithms and functionalities that are important for understanding how the system works.
5.2.1 System Control
The modelViewController package is responsible for the user interface, storing/retrieving data
from configuration files, and running CommonSens. In this section we discuss the most important classes: CommonSens, MainView, Core, QueryParser, SensorCreator, EnvironmentCreator, LoICreator, Environment, EnvironmentPanel, and MovementCreator. Figure 5.5 gives an
overview of these classes in the modelViewController package.
The main class is called CommonSens. The most important task of the CommonSens class
is to read the command line arguments and configuration files, as well as to start up the core
of the system, i.e., the GUI (view) and the controller. The class CommonSens can take a set of
command line arguments, which can be used to instruct the system to e.g. perform regression tests of the system or to run experiments. The command line arguments are associated with experiments and regression tests, and are further discussed in Chapter 6.
Figure 5.5: Key classes in the modelViewController package.
Figure 5.6: Main window in CommonSens.
The class CommonSens has an association mainPanel that points to one instance of the
class MainView. MainView is the GUI, and it is required that the GUI is simple to understand
and to use for the application programmer. This implies that the application programmer should be able to open files that describe environments and sensors, to create and edit environments and sensors, and to open
and save queries. The application programmer should be able to place sensors in the instantiated
environment and to obtain information about the probability of false positives from the approximation of LoIs. Finally, the GUI should inform about the result from the query instantiation
and let the application programmer start the event processing phase. When necessary, the GUI
should let the application programmer stop the event processing and transfer CommonSens to
the system shut down phase.
The current MainView layout is shown in Figure 5.6. In the following we only present the
buttons that are related to the application programmer tasks. The current layout was designed
during the development of CommonSens. Hence, it contains several buttons related to experimentation. These buttons are discussed in Chapter 6. In order to design a user interface that is better suited to application programmers, i.e., easier to use, there is a need for an extensive study. Such a study needs to include a representative selection of application programmers who give feedback about the layout through an iterative process with the GUI designers. Conducting such a study is too time consuming for a prototype, hence we consider it future work.
CommonSens has modes for simulations (‘Start Simulation’ button), step-by-step simulation (‘Step Simulation’), and real-world monitoring (‘Start Monitoring’). The step-by-step
simulation and full simulation modes are mostly used by the application programmer during
testing of a new environment. The real-world monitoring mode is set when the environment is
fully tested and when the monitoring should commence. Although they have different purposes,
the modes are implemented quite similarly.
MainView is associated with the Core class. When the application programmer has given an
instruction, Core is responsible for controlling the system so that it runs the correct operations,
i.e., that it provides the application programmer with sensor placement, instantiates the queries
and performs event processing.
Currently, Core manages only one environment. This means that one running instance
of CommonSens refers to one single home. This is currently sufficient. On the other hand, a
possible extension is to let the application programmer handle several instances at the same
time, for instance housing cooperatives with many apartments and monitored persons. The
environment is specified in a configuration file and can be opened by pushing the ‘Open Environment’ button. The ‘New Environment’ button initiates the creation of a new environment.
The environment is accessed through the class EnvironmentCreator. EnvironmentCreator uses a
GUI that shows the current environment. The GUI is instantiated through the EnvironmentPanel
class (see Figure 5.7). Currently, the application programmer is restricted to instantiate the environment in two dimensions. On the other hand, as explained in Section 5.1.1, the class Triple
contains the values x, y and z to define any spatial coordinate. A future extension of CommonSens could use three dimensions to instantiate the environment, which gives an even more realistic model of the environment.
Figure 5.7: Environment creator in CommonSens.
When the environment is fully created, the application programmer
can save the environment by pressing the ‘Save Environment’ button.
To show what the environment instance looks like for the application programmer, we have
chosen to display the environment that is used in [SGP10b]. The GUI that shows the environment allows the application programmer to create simple environments by adding objects
(‘Add Object’ button), sensors (‘Add Sensor’ button) and LoIs (‘Add LoI’ button). In addition,
the application programmer is allowed to resize and rotate any objects in the environment. The button ‘Add Person’ allows the application programmer to add a monitored person
to the environment so that certain movement patterns can be investigated. ‘Toggle Coverage’
switches between the coverage area as it is specified by the producer of the physical sensors and
the reduced coverage areas. The figure shows the reduced coverage areas.
The application programmer can open queries with the ‘Open Query’ button. Figure 5.6
shows the complex query [(DetectPerson == Person1, LoI2, 10steps, min 50%)
-> (DetectPerson == Person1, LoI1, 10steps, min 50%)]. CommonSens does
not yet provide a lexical analyser [Bor79] like lex, so the queries have to be written in a very
simple form to be accepted by the query parser. The class QueryParser is discussed in Section
5.2.3.
Currently, the application programmer can create new objects and new physical sensors.
New objects can be created by pushing the ‘New Object’ button, which allows the application
programmer to set shape and add permeability tuples to the object. In order to create physical
sensors, the application programmer can push the ‘New Physical Sensor’ button. The application programmer can set the properties of the physical sensor, e.g. define the coverage. The
coverage area is simply defined by setting the angle. For instance, a circular coverage area
has an angle of 360◦. A camera might have a coverage area of 45◦. Since the GUI only provides support to create physical sensors, both logical sensors and external sources have to be
defined manually through configuration files. Currently, more complex coverage areas have to
be defined manually in configuration files.
Finally, in order to investigate how the instantiation works with a monitored person, the application programmer is allowed to create movement patterns, i.e., patterns that show where the
monitored person moves inside the environment. The creation of movement patterns is handled
by the MovementCreator class. Movement patterns are easily created by letting the application
programmer decide at which coordinates in the environment he wants the monitored person
to be. When all the coordinates are defined, CommonSens automatically creates a movement
pattern. The movement pattern can also be defined manually in a configuration file.
CommonSens provides configuration files for the environments, sensors, queries, and movement patterns. The configuration files act as repositories that allow reuse of e.g. already defined
environment objects or sensors. The configuration files are written in a human readable format,
which allows the application programmer to manually adjust and edit them. Currently the formats are very simple; there is a tag that tells what type of value the configuration file parser
should expect. The current formats can be easily transformed into XML and parsed by an XML
parser. It is also possible to store the configuration files in a database if that is needed.
5.2.2 Physical Sensor Creation and Placement
CommonSens allows the application programmer to place sensors in the instantiated environment. This process helps the application programmer to investigate how the coverage areas of
the physical sensors are affected by the objects and how the LoIs are approximated. In addition,
when the sensors are chosen, the I SEC and N O I SEC sets are defined as well, which means that
the placement of sensors is a process that is part of the query instantiation. This section explains
how this functionality is implemented.
First, the sensors have to be created or retrieved from a sensor repository. Physical sensors
can be created by using the GUI or manually by setting their properties in a configuration file.
Logical sensors and external sources can only be created manually. Second, the application
programmer has to choose the number of sensors that should approximate the LoIs. If there
already exist sensors in the environment that approximate the LoI, these can be included as
well.
CommonSens calculates the FPP ROB values in two calculation steps. The first step calculates the FPP ROB value during the placement of sensors. The second step calculates the
FPP ROB value when the queries are instantiated. The first step is simpler than the second, since
it does not map capabilities to the coverage areas. The second step is related to the atomic
queries and maps the capabilities of the sensors with the LoI. We explain the reason for this two
step process in the following.
As noted above, the first step investigates the approximation of the LoI and ignores whether the
physical sensors provide the correct capabilities. This step corresponds directly with the definition of L O IA PPROX. The capabilities are ignored since there is no link between a LoI and
the capabilities that the application programmer wants to detect. The link is only defined in an
atomic query, where both the cond triple and the LoI are used. Hence, during placement, the
link between the LoI and the capability has to be known implicitly by the application programmer. For instance, an environment can have a LoI called medicineCloset, which covers
the medicine closet in a home. The application programmer moves the sensors so that their
coverage areas approximate medicineCloset as described in L O IA PPROX. When the application programmer moves the mouse pointer over a LoI in the environment, CommonSens
automatically generates the FPP ROB value with the first calculation step. The calculation is
performed by a method calculateError() in the Environment class, which returns a double that indicates the FPP ROB value. The calculateError() method is discussed later in
this section. Note that the methods sometimes use arguments. To keep this presentation simple
we do not show the arguments in the methods.
The second calculation step maps the LoI with the capability in the atomic query. This calculation step is more accurate, and is performed automatically during the query instantiation.
Even though this does not correspond directly with the definition of L O IA PPROX, it has a more
practical value. Currently, the second calculation step cannot be used by the application programmer to manually approximate the LoIs. For instance, in the field ‘Current condition/state:’
in the GUI in Figure 5.6, the FPP ROB value is shown next to the I SEC and N O I SEC sets. This
value is obtained from the second calculation step. The second calculation step is performed
by the method calculateError(). The method finds the relevant sensors and populates the
I SEC and N O I SEC sets. In addition, it sets the fPProb value in the AtomicQuery object.
Even though the two calculations and methods take two different sets of arguments, they are
very similar. First, the methods traverse all the physical sensors in the environment and choose
those physical sensors that cover the LoI. The second calculation step also requires that the
chosen sensor provides the capability. All these physical sensors are placed in an I SEC set. The
method uses the polygon properties of the GPC library and calculates the intersection between
the coverage areas. Second, the method chooses the physical sensors that are in the N O I SEC set.
This choice is made by creating two temporary intersections. The first temporary intersection
is between the coverage area of the physical sensor and the coverage areas in the I SEC set. The
second temporary intersection is between the coverage area of the physical sensor and the LoI.
If the area of the first intersection is larger than 0 and the area of the second intersection is 0,
the physical sensor is placed in the N O I SEC set. When the I SEC and N O I SEC sets are defined,
the method runs an XOR operation on the intersections of the coverage areas in the I SEC set
and the intersections of the coverage areas in the N O I SEC set. The result of this operation is the
approximation. Finally, the FPP ROB value is calculated by subtracting the quotient of the area
of the LoI and the area of the final intersection from 1. If the LoI is not defined in the atomic
query, the method simply chooses all the sensors in the environment that provide the capability and adds them to the I SEC set. This requires that all the sensors send data tuples that match the condition. We have not yet defined the behaviour of CommonSens when the LoI is not specified,
and a consistent definition remains an open issue. The method calculateError() is shown
in Appendix A.1.
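The core of this calculation can be summarised by the following sketch. It stands in for the GPC-based implementation: the Polygon helper with placeholder set operations, the method signature and the area computations are all assumptions, and only the selection rules for the I SEC and N O I SEC sets and the FPP ROB formula are taken from the text; the actual calculateError() is listed in Appendix A.1.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of sensor selection and the FPProb calculation.
// Polygon is an assumed helper standing in for the GPC polygon classes.
class Polygon {
    Polygon intersection(Polygon other) { return this; }   // placeholder implementations;
    Polygon xor(Polygon other) { return this; }             // the real code uses GPC set operations
    double area() { return 1.0; }
    boolean intersects(Polygon other) { return true; }
}

class LoiApproximation {
    List<Polygon> isec = new ArrayList<>();
    List<Polygon> noIsec = new ArrayList<>();

    double calculateError(List<Polygon> coverageAreas, Polygon loi) {
        // 1. Sensors whose coverage area overlaps the LoI form the Isec set.
        for (Polygon cov : coverageAreas) {
            if (cov.intersects(loi)) isec.add(cov);
        }
        // 2. Sensors that overlap the Isec coverage but not the LoI form the NoIsec set.
        Polygon isecArea = intersectAll(isec);
        for (Polygon cov : coverageAreas) {
            if (!isec.contains(cov) && cov.intersects(isecArea) && !cov.intersects(loi)) {
                noIsec.add(cov);
            }
        }
        // 3. The approximation is the XOR of the two intersections,
        //    and FPProb = 1 - area(LoI) / area(approximation).
        Polygon approximation = isecArea.xor(intersectAll(noIsec));
        return 1.0 - loi.area() / approximation.area();
    }

    private Polygon intersectAll(List<Polygon> polys) {
        Polygon result = polys.isEmpty() ? new Polygon() : polys.get(0);
        for (int i = 1; i < polys.size(); i++) result = result.intersection(polys.get(i));
        return result;
    }
}
```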
The issue with the current two-step calculation process is that the application programmer
can choose sensors that do not provide the correct capabilities. An alternative and more precise
approach for calculating the FPP ROB values is to let the application programmer choose which
atomic query he is interested in matching the approximation for. This approach lets the application programmer approximate the LoI for each atomic query, which excludes the possibility
that the application programmer can include sensors in the I SEC and N O I SEC sets that do not provide the correct capabilities.
Figure 5.8: Classes involved in the calculation of reduced coverage area.
During the sensor placement it is important for the application programmer to see how the
permeability values in the objects affect the coverage areas. In the following we show how
the calculation is implemented. As mentioned in Section 5.1.1, the Shape class contains the
associations, boundary and boundaryReduced, that point to two sets of Triple objects.
In order to define boundaryReduced, CommonSens uses a signal propagation model, e.g.
Signal Model 1. To move a physical sensor, the application programmer uses the GUI to point
at the coverage area of the physical sensor. CommonSens notices if the physical sensor has
been moved by the application programmer. If it has been moved, the calculation of the new
reduced coverage area starts.
The classes that are involved in the calculation of the reduced coverage area are shown
in the UML sequence diagram in Figure 5.8. The EnvironmentPanel object calls the method
getPolygonReduced() in the PhysicalSensor object, which calls a similar method in its
Shape object. The Shape object, which already has a set of Triple objects in boundary,
uses these Triple objects to define the reduced range for each ray in the boundaryReduced
set. Note that in Definition 4.1.1, there are two types of rays that form the coverage area.
The first type models the distance from the sensor to the edge. The second type builds the
boundary between the edges (see Definition 4.1.2). Since we use the Polygon class from the
GPC library, we are only required to model the coverage area in the first way. The Polygon class
automatically generates a polygon from the boundary points. The Ray class is just performing
the calculations using the R EDUCE algorithm on each ray (see Figure 4.3).
In fact, our implementation of the R EDUCE algorithm iterates through every coordinate
along the ray. For each point, this linear algorithm investigates the current object and permeability value. This is simpler than identifying intervals, but more time consuming.
If we wanted to correctly match the R EDUCE algorithm, the implemented method would have
needed to identify intervals instead of iterating.
When the signal strength value reaches a predefined value or the permeability value is 0, the
algorithm returns a new Triple object. This object is used to define the new reduced coverage range. This is illustrated in Figure 5.9. In a) the original Triple object defines the boundary of the ray. The method reduceRay() returns the new Triple object, as shown in b). The implementation of the method is shown in Appendix A.2.
Figure 5.9: Before and after the reduceRay() method has been called.
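The linear variant of the R EDUCE algorithm described above can be sketched as follows. The step length, the cut-off value and the way the signal strength is attenuated per step are assumptions; the actual reduceRay() implementation is listed in Appendix A.2.

```java
// Sketch of reduceRay(): walk along the ray coordinate by coordinate and reduce
// the signal strength according to the permeability found at each point.
class RayReducer {
    interface PermeabilityLookup {
        double permeabilityAt(double x, double y);   // 1.0 = free space, 0.0 = blocking object
    }

    static final double MIN_STRENGTH = 0.1;          // assumed cut-off value

    // Returns the distance from the sensor at which the ray effectively stops.
    static double reduceRay(double startX, double startY, double angle,
                            double maxRange, PermeabilityLookup env) {
        double strength = 1.0;
        double step = 0.1;                            // assumed step length along the ray
        for (double d = 0; d <= maxRange; d += step) {
            double x = startX + d * Math.cos(angle);
            double y = startY + d * Math.sin(angle);
            double perm = env.permeabilityAt(x, y);
            strength *= perm;                         // attenuate by the permeability value (assumed model)
            if (perm == 0.0 || strength < MIN_STRENGTH) {
                return d;                             // reduced coverage range; a new Triple in the real code
            }
        }
        return maxRange;                              // no reduction within the original range
    }
}
```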
5.2.3 Event Processing Model Creation
This section presents the implementation of the event processing model and the event processing model creation. The classes that are related to the event processing model belong to the
eventProcessor package. The event processing model creation is initiated when the application
programmer chooses to start simulation, step simulation or start monitoring. We have chosen to
let CommonSens wait with the event processing model creation until the application programmer starts simulation or monitoring since it is a complex process.
The idea with the creation of the boxes-and-arrows structure is that it provides a framework that extends the complex queries with the instance specific information. In addition, the
structure arranges the atomic queries so that they are simpler to evaluate and to process. This
especially applies to the logical operators, which combine a set of atomic queries that should
be evaluated concurrently. The event processing model creation is closely related to the second
calculation step that we described in the previous section. In this step the I SEC and N O I SEC
sets are populated based on the sensors that provide the correct capabilities and approximate
the LoI that is used in the atomic queries. As noted, the approximation of the LoIs is still not
automated; the application programmer has to manually choose sensors and place them in the
environment.
The key classes in the eventProcessor package and their relations are shown in the UML
class diagram in Figure 5.10. One or more objects of the class QueryPoolElement implement
one complex query in the query pool. QueryPoolElement has an association currBox that
points to a Box object, which is a superclass for the classes ConcurrencyBox and TimedListOfBoxes. In addition, the Box class also has associations to two other classes. The association
nextBoxTransition points to a Transition object, which points back to a new Box object.
The structure of associations makes it possible to create the boxes and arrows, where the arrows
are represented by the Transition objects. In the current implementation, the Transition class
does not have any other tasks than to point to the next Box object. Hence, an alternative solution
would have been to skip the Transition class and have an association transition between
the Box objects.
Box has an association elements, which points to zero or more ArrayList objects. Each
of these ArrayList objects points to a set of QueryElement objects. We describe this structure more thoroughly, since it is essential for how the current event processing is performed.
Figure 5.10: Key classes in the eventProcessor package.
Figure 5.11: Instantiation of a box with atomic queries.
The idea with this structure is to provide simple support for the logical operators ∧ and ∨. The
current syntax allows the application programmer to write simple logical expressions like qA1 ∧
qA2 ∧ qA3 ∨ qA4 ∧ qA5 ∨ qA6 . The expression is true if one of the sets {qA1 , qA2 , qA3 } or {qA4 ,
qA5 } or {qA6 } is true. All of the atomic queries that are related with the ∧ operator have to be
true, and we place these atomic queries together, as shown in Figure 5.11. The list of atomic
queries that are related with ∧ is called an ∧-list. In the current implementation, all atomic
queries in an ∧-list have to have similar timing and P -registration. For example, queries like
[(Temperature == 20, kitchen, 3hours) && (Temperature == 18, bedroom,
08:00hr, 09:00hr, 50%)] are not allowed. Currently, all the succeeding atomic queries,
i.e., those that are related with ∧, inherit the temporal attributes of the first atomic query. We
have chosen to do this because it simplifies the semantics of the ∧ operator.
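The resulting data layout can be illustrated with plain collections. The class names follow the text; building the lists by hand like this is only meant to show how the ∧-lists are organised inside a box, not the actual instantiation code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: a Box holds one ArrayList per ∨-separated group; the atomic queries
// inside one ArrayList are related by ∧ and must all be matched.
class AtomicQuery {
    String name;
    AtomicQuery(String name) { this.name = name; }
}

class Box {
    List<List<AtomicQuery>> elements = new ArrayList<>();   // the ∧-lists
}

class AndListExample {
    public static void main(String[] args) {
        Box box = new Box();
        // qA1 ∧ qA2 ∧ qA3  ∨  qA4 ∧ qA5  ∨  qA6
        box.elements.add(new ArrayList<>(Arrays.asList(new AtomicQuery("qA1"),
                new AtomicQuery("qA2"), new AtomicQuery("qA3"))));
        box.elements.add(new ArrayList<>(Arrays.asList(new AtomicQuery("qA4"),
                new AtomicQuery("qA5"))));
        box.elements.add(new ArrayList<>(Arrays.asList(new AtomicQuery("qA6"))));
        // The box is satisfied as soon as every query in one of the three lists is matched.
    }
}
```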
Figure 5.11 also shows the pointers to the sensors that are part of the I SEC and N O I SEC sets.
Since the QueryElement classes are instantiated as AtomicQuery objects, we have used these
in the figure. Even though the UML diagram shows that the ArrayList objects point to a set of
QueryElement classes, the current implementation only points to AtomicQuery objects. However, future implementations might use other subclasses of QueryElement as well, for instance
to solve the following issue with the current structure. The general drawback of having such a
simple structure is that it does not allow complex logical structures. For instance, expressions
like ((qA1.0 ∨ qA1.1 ) ∧ (qA2.0 ∨ qA2.1 ) ∧ qA3 ) ∧ (qA4 ∨ qA5 ) are currently not supported. One
question that needs to be answered is how much logical complexity we should expect. Answering such
a question requires a more thorough requirement analysis. On the other hand, extending the
support for more complex logical expressions gives CommonSens stronger expressiveness than it has today, which is generally important. It might be sufficient to
actively use other subclasses of QueryElement, but at this point we consider solving this issue
as future work. The Box object has an association andLists that points to a set of AndChainCalculations objects. These objects are used to organise the event processing, i.e., to find out
which of the lists are true. When one of the lists is true, the event processor can start the
transition to the next box.
As with the ConcurrencyOperator and TimedListOfAtomicQueries in the language package, the event processing model uses ConcurrencyBox and TimedListOfBoxes objects to tell the
event processor how to handle the complex queries. The objects are chosen during the instantiation, which is discussed later in this section. ConcurrencyBox points to two QueryPoolElement
objects; one for each list. This corresponds to how the ConcurrencyOperator class in the
language package uses the two associations firstChain and secondChain to point to two
TimedListOfAtomicQueries objects. TimedListOfBoxes points to QueryPoolElement through
theBoxList.
The data tuple filter is an important structure in the event processing model since it chooses
the sensors that should be pulled. The data tuple filter consists of two classes: DataTupleFilter
and MainDataTupleFilter. We have divided the data tuple filter into two classes to meet future
extensions of CommonSens. The current implementation of CommonSens processes one query at a time, i.e., when the application programmer opens a query from file, CommonSens removes the old one and inserts the new one. This functionality is implemented in the
Core class in the modelViewController package. Even though Core does not support many
queries, we have implemented a framework in the eventProcessor package that can support
several concurrent queries. MainDataTupleFilter has an association currentFilters that
points to a set of DataTupleFilter objects. One DataTupleFilter object is responsible for one
complex query. This responsibility is illustrated through the association currBox that points
to the current Box object, i.e., the current set of atomic queries that should be evaluated.
There is no communication between the complex queries. If there are two atomic queries
that need to pull the same sensor concurrently, there is no functionality in the Box class that
makes sure that the sensor is not pulled twice within a very short time interval. We assume
that concurrent pulls are unnecessary since they will provide the same value in the data tuple.
Hence, the main task of the MainDataTupleFilter is to reduce the number of sensor pulls. The
MainDataTupleFilter pulls the sensor once and distributes the data tuple to both the atomic
queries. This process is further discussed in Section 5.2.4.
The communication between the sensors and the data tuple filter is done through the current
environment, i.e., the Environment object. When the sensors are pulled, they return a data tuple.
The data tuple is added to the ConcurrentLinkedQueue object. This queue is accessed by the data tuple filter; the data tuple is removed from the queue and sent to the DataTupleFilter object, which sends it to the current atomic query.
The second part of this section briefly discusses how the classes in the eventProcessor
package are instantiated. We first discuss the creation of the query pool elements. The constructor of QueryPoolElement receives the complex query and returns the boxes-and-arrows
structure, i.e., the Box and Transition objects. This functionality is implemented in QueryPoolElement, which uses the method instantiateBoxes(). instantiateBoxes() is
recursive and creates the boxes-and-arrows structure by calling itself in new QueryPoolElement
objects. It iterates through the QueryElement objects in the parsed complex query and performs
operations that depend on the QueryElement subclass. In every new QueryPoolElement object, instantiateBoxes() performs a set of if-tests and works as follows (a sketch is given after the list):
• TimedListOfBoxes objects initiate a new set of QueryPoolElement objects.
• AtomicQuery objects are added to the current list of atomic queries in the current Box
object. When the AtomicQuery objects are separated by ∧ operators, they are pushed into
the current ArrayList of atomic queries.
• When the QueryElement is an ∨ operator, the Box object creates a new list of atomic
queries, i.e., a new ArrayList.
• If the QueryElement is instantiated as a ConsecutiveRelation object, new Transition and
Box objects are created.
• ConcurrencyBox objects initiate two lists of TimedListOfBoxes objects.
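The following sketch mirrors the if-tests listed above. It is a simplified, non-recursive rendering of instantiateBoxes() that omits the timing attributes, the Transition objects and the concurrency handling; the class names follow Figures 5.3 and 5.10, while the method itself is an assumption.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the classes used in the sketch.
abstract class QueryElement { QueryElement nextQueryElement; }
class AtomicQuery extends QueryElement { }
class OrOperator extends QueryElement { }
class ConsecutiveRelation extends QueryElement { }
class Box { List<List<AtomicQuery>> elements = new ArrayList<>(); }

// Sketch of instantiateBoxes(): walk the parsed QueryElement list and build Box
// objects, starting a new ∧-list on ∨ and a new box on a consecutive relation.
class BoxBuilder {
    static List<Box> instantiateBoxes(QueryElement first) {
        List<Box> boxes = new ArrayList<>();
        Box currentBox = new Box();
        List<AtomicQuery> currentAndList = new ArrayList<>();
        currentBox.elements.add(currentAndList);
        boxes.add(currentBox);

        for (QueryElement e = first; e != null; e = e.nextQueryElement) {
            if (e instanceof AtomicQuery) {
                currentAndList.add((AtomicQuery) e);      // ∧-related queries share one list
            } else if (e instanceof OrOperator) {
                currentAndList = new ArrayList<>();       // ∨ starts a new ∧-list in the same box
                currentBox.elements.add(currentAndList);
            } else if (e instanceof ConsecutiveRelation) {
                currentBox = new Box();                   // → creates a transition to a new box
                currentAndList = new ArrayList<>();
                currentBox.elements.add(currentAndList);
                boxes.add(currentBox);
            }
            // TimedListOfAtomicQueries and ConcurrencyOperator would recurse here.
        }
        return boxes;
    }
}
```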
As we have shown above, the instantiation process consists of adding elements to the parsed
complex query. In practice, the parsed complex query is inserted into the Box objects and
its subclasses. This corresponds to Definition 4.2.1; the instantiated query inherits most of the
attributes from the query that is written by the application programmer. In the following section,
we show how the instantiated query is used in the event processing phase.
5.2.4 Event Processing
This section describes the event processing phase, i.e., the steps CommonSens takes in order
to evaluate an instantiated complex query. The event processing phase consists of pulling data
tuples from the sensors in the I SEC and N O I SEC sets. In addition, the event processing phase
consists of comparing the values in the data tuples with the conditions in the atomic queries.
The event processing phase begins when the application programmer chooses to start a
simulation or monitoring. The simulation mode is convenient when the application programmer
wants to test the system and run regression tests. For example, several of the experiments in
Chapter 6 are run in simulation mode. The main difference between simulation and monitoring
is the timing. A simulation uses virtual sensors which are pulled as fast as possible, which
means that the simulations are not real-time. Both the simulation and monitoring are initiated
by the method startEventProcessing() in Core. Core, which is the controller class,
handles most of the functionality related to simulations and monitoring. If the event processing
is started in monitoring mode, the monitoring is started as a separate thread. This makes it
possible for the application programmer to use the system during the event processing phase,
and to stop the monitoring thread when he wants to initiate the system shut-down phase. If
the event processing is started as a simulation, the whole process locks the system until the
simulation has finished.
For both simulation and monitoring, the event processing phase has to be set up. The setup
is done by calling the method setupEventProcessingPhase(), which is located in the
Core class. First, the method starts the timer. If the timer is told to start in simulation mode,
it sets the time to 0. An alternative would have been to let the application programmer choose
the timestamp he wants the system to start simulation at. In monitoring mode, CommonSens
uses the system clock to set the time to the current point of time. Second, the method starts
all the sensors. This process mainly consists of setting the timing and the shared queue. If
the Sensor object is connected to a real sensor, it wakes up the sensor and makes it ready to
send data. Currently, the functionality related to waking up the real sensors is implemented
in the CommonSens class, but future work consists of moving this functionality to the sensor
classes. We describe the process of waking up the real sensors as part of the evaluation in
Chapter 6. The third step in setupEventProcessingPhase() is to create and return the
MainDataTupleFilter object. Throughout the event processing phase, Core only interacts with
the MainDataTupleFilter object.
An overview of the event processing is shown in Figure 5.12. Note that there are several
details that are not included in the figure. We describe these details during the following presentation. The evaluation is performed by calling the method pullAndEvaluate() in the
MainDataTupleFilter class. First, the method calls the local method pullSensors(). It
goes through all the DataTupleFilter objects. Note that the current implementation accepts one
complex query, thus, it uses only one DataTupleFilter object. However, we refer to the DataTupleFilter objects in plural to make the presentation more general and more representative for
future extensions.
For each of the DataTupleFilter objects, pullAndEvaluate() calls the method getSensorsToPull(). The method is located in DataTupleFilter and calls the method getSensorsToPull() in the current Box object.
Figure 5.12: Overview of the event processing phase.
The Box object iterates through all the atomic queries that are currently evaluated. These atomic queries are identified by the timestamps.
The two timestamps in timed atomic queries tell when the atomic query should be evaluated.
If the current time, either simulated or real, is within the interval of an atomic query, the Box object extracts the sensor references in the I SEC and N O I SEC sets of the atomic query. δ-timed atomic queries are different, but when the evaluation of the δ-timed atomic queries starts, the timestamps in the atomic query are set. Since the tb and te attributes of the atomic query are defined as soon as the evaluation of the δ-timed query starts, this process turns the δ-timed
atomic query into a timed atomic query. The evaluation of the atomic queries is discussed later
in this section. The sensor references are returned as hash maps. The hash map data type
removes redundant sensors and provides a simple way to avoid pulling the same sensor twice.
CommonSens uses a pull-based model, i.e., it obtains information from the sensors when they
are pulled.
The current implementation of CommonSens does not fully support variations in the sampling frequency. The current sampling frequency is statically defined and is used by all the
sensors. This approach simplifies many of the issues that come from synchronising the evaluation of data tuples from sensors with different sampling frequencies. Future work consists of
investigating the sampling frequency issues further.
Each of the sensors that are referred to in the hash map is accessed through the Environment object. In order to keep Figure 5.12 simple, it only shows the direct connection to the
Sensor object. The sensor is pulled with the method pullThisSensor(). The sensor obtains the value, creates the data tuple object and adds the current timestamp. The resulting data
tuple is added to the shared queue. When all the sensors have been pulled, MainDataTupleFilter
polls the data tuples from the shared queue. We use the term poll, since this term corresponds
to the poll() method that obtains the data tuple that is in front of the shared queue.
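The hand-over of data tuples through the shared queue can be sketched with java.util.concurrent.ConcurrentLinkedQueue, which is the class named in Figure 5.2; the DataTuple fields and the helper methods are assumptions.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch: a pulled sensor adds its data tuple to the shared queue, and the
// data tuple filter later polls the tuples in FIFO order.
class DataTuple {
    String sensorId;
    Object value;
    long timestamp;
}

class SharedQueueExample {
    static final ConcurrentLinkedQueue<DataTuple> dataTupleQueue = new ConcurrentLinkedQueue<>();

    static void onSensorPulled(DataTuple tuple) {
        dataTupleQueue.add(tuple);                           // the sensor adds the tuple when it is ready
    }

    static void drainQueue() {
        DataTuple tuple;
        while ((tuple = dataTupleQueue.poll()) != null) {    // poll() returns null when the queue is empty
            // hand the tuple to the DataTupleFilter / current atomic query
        }
    }
}
```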
An alternative to using the shared queue is to let the method pullThisSensor() return
the data tuple directly. This would have been sufficient in the current implementation. However,
we use the shared queue in order to handle sensors that do not produce data tuples immediately
when they are pulled. Simple sensors like temperature sensors report immediately when they
are pulled, but there might exist sensors that are more complex. For instance, a logical sensor
might need to perform complex computations on the data tuples they obtain. A logical sensor
that provides face recognition might need time to process the camera image, detect the face and
compare the face with features in a database. In order to avoid that this logical sensor forces the
pulling process to wait, it is more convenient to use a shared queue. The sensor adds the data
tuple to the shared queue when it is ready. However, the current implementation does not explicitly support delayed data tuples.
We assume that all the data tuples are added to the queue. If one of the sensors is delayed, the data tuple will simply not show up in the current sensor pull. As described in the following discussion, an absent data tuple affects the evaluation of the related atomic query.
Figure 5.12 is simplified, and it appears that only the method pullAndEvaluate() in
MainDataTupleFilter performs all the computation related to the event processing. The truth is
that this process is performed by several methods in MainDataTupleFilter. However, to maintain
a simple overview of the event processing phase, we choose to keep this simplification. Further
details are shown in the code.
When the data tuple is polled from the shared queue, it is important to know which atomic
query should evaluate the data tuple. Currently, the shared queue is polled until it is empty. The
data tuples are put into a hash map. For each of the data tuple filters, the sensors are identified
once more, which is a repetition of the previous calls. A more efficient solution is to keep track of the box-sensor relationship from the first time this information is obtained.
The bottom part of Figure 5.12 shows the methods in the query evaluation component. If
the box contains many atomic queries, the box receives a batch of data tuples. MainDataTupleFilter sends the batch to DataTupleFilter, which sends it to the box by using the method
evaluateBatch(). evaluateBatch() iterates through all the atomic queries. This is
done with a double for-loop that first iterates through the ArrayList objects, and for each ArrayList object, it iterates through the AtomicQuery objects. Finally, all the data tuples in the
batch have to be evaluated with the current atomic query. If the data tuple comes from a sensor
in the I SEC set, it should match the condition in the atomic query. Otherwise, if the data tuple
comes from a sensor in the N O I SEC set, it should not match the condition in the atomic query.
If a batch of data tuples match the I SEC and do not match the NoIsec set, we say that the
condition in the atomic query is matched.
Our current algorithm gives evaluateBatch() a complexity of O(N³). Such a complexity is very time consuming for large values of N, i.e., if the Box object contains a large
number of AtomicQuery objects and there is one data tuple for each atomic query. However,
as we show in Chapter 6, current system utilisation does not suffer from issues regarding the
complexity. On the other hand, O(N³) is generally not good, and future work is to optimise
this algorithm.
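The nesting that leads to this complexity can be sketched as follows. The helper interface and the shape of the batch are assumptions, and the sketch assumes that every tuple in the batch comes from a sensor in either the I SEC or the N O I SEC set of the query; only the loop nesting and the matching rule are taken from the text.

```java
import java.util.List;
import java.util.Map;

// Sketch of evaluateBatch(): for every ∧-list and every atomic query in it,
// check all data tuples in the batch against the query's condition.
class BatchEvaluator {
    interface Query {
        boolean isFromIsecSensor(String sensorId);   // otherwise assumed to be a NoIsec sensor
        boolean conditionMatches(Object value);
    }

    // The batch maps sensor ids to the values obtained in this pull round.
    static boolean evaluateBatch(List<List<Query>> andLists, Map<String, Object> batch) {
        for (List<Query> andList : andLists) {                       // each ∨-separated ∧-list
            boolean listMatched = true;
            for (Query q : andList) {                                // each atomic query in the list
                for (Map.Entry<String, Object> tuple : batch.entrySet()) {
                    boolean matches = q.conditionMatches(tuple.getValue());
                    if (q.isFromIsecSensor(tuple.getKey()) ? !matches : matches) {
                        listMatched = false;                         // Isec tuples must match, NoIsec must not
                    }
                }
            }
            if (listMatched) return true;                            // one fully matched ∧-list is enough
        }
        return false;
    }
}
```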
The conditions in all the lists with atomic queries that are related with the ∧ operator have
to be matched. If one of the conditions is not matched, the algorithm jumps to the next set
of atomic queries. Once every condition in one of the lists is matched, the algorithm starts
updating the statistics for this list. The process involves setting the current list in evaluation
mode. If the atomic queries are δ-timed, they become timed: The tb attribute is set to contain
the current timestamp. te is set to contain the sum of tb and the δ-time duration.
Figure 5.13: Mixed matching versus uniform matching.
Since the P -registration is inherited by all the atomic queries in the ∧-list, the current implementation
does not allow mixed matching of the atomic queries. By mixed matching we mean that atomic
queries in an ∧-list are matched by two different sequences of data tuples. Examples of mixed matching and uniform matching of the complex query qA1 ∧ qA2 are shown in Figure 5.13. In order to realise mixed matching, the application programmer has to use the E QUALS
concurrency operator instead.
When the box has identified which list of atomic queries has matched the condition
and updated the statistics, the data tuple filter continues to evaluate whether it should perform a
transition to the next box or not. The evaluation is performed by the method evaluateBox(),
which looks at each of the ∧-lists that are under evaluation. If one of the ∧-lists has met the
temporal requirements, the data tuple filter starts to send data tuples to the next box. If there are
no more boxes, the complex query is finished.
evaluateBox() also investigates whether there are any ∧-lists that are timed and where the
current time has exceeded the tb timestamp. In this case, it updates the statistics and sets a
flag that indicates that the evaluation of the ∧-list has started. Note that such an operation is
allowed since the atomic queries can be P -registered; they do not have to be matched by 100%
of the data tuples. In case the current time exceeds the value in te, evaluateBox() checks
whether the P -registration is satisfied. If not, the evaluation of the complex query stops. If the application programmer has stated that he is interested in deviations from the complex query,
evaluateBox() sends a notification about the deviation.
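The control flow described above can be summarised with the following sketch; AndListState and DeviationListener are hypothetical interfaces used only for illustration, and the real evaluateBox() performs additional bookkeeping.

// A rough sketch of the checks described for evaluateBox(); names such as
// AndListState and DeviationListener are hypothetical and only illustrate
// the control flow, not the actual implementation.
import java.util.List;

class BoxEvaluationSketch {
    void evaluateBox(List<AndListState> andLists, long now, DeviationListener listener) {
        for (AndListState list : andLists) {
            if (!list.underEvaluation()) {
                continue;
            }
            if (now > list.endTime()) {                 // te has been exceeded
                if (list.pRegistrationSatisfied()) {
                    list.markFinished();                // start sending tuples to the next box
                } else {
                    list.stopEvaluation();              // evaluation of the complex query stops
                    if (list.deviationRequested()) {
                        listener.notifyDeviation(list); // report the deviation
                    }
                }
            }
        }
    }
}

interface AndListState {
    boolean underEvaluation();
    long endTime();                    // te
    boolean pRegistrationSatisfied();
    boolean deviationRequested();
    void markFinished();
    void stopEvaluation();
}

interface DeviationListener { void notifyDeviation(AndListState list); }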
5.3 Discussion and Conclusion
This chapter presents the prototype implementation of CommonSens. The implementation is
mainly made to show that the concepts from Chapters 3 and 4 can be implemented in a real
system. The structure of this chapter follows the life cycle phases of CommonSens, i.e., we
give an overview of how the models are implemented before showing the functionality and
placement and creation of the physical sensors. Finally, we discuss the elements that are part of
the implementation of the event processing.
During the presentation of the implementation we have pointed out that parts of the design
could have been different. However, the implementation has been developed concurrently with the
concepts, as a proof-of-concept during the design phase, and this has affected the final
implementation. This means that there are many steps that can be taken in order to optimise
the performance of the code. On the other hand, in this
chapter we show that it is possible to implement the models and concepts of CommonSens. In
the following chapter, we evaluate our claims by using our implementation.
Chapter 6
Evaluation
This chapter evaluates the claims that we made in Chapter 1. We claim that CommonSens
(1) manages to detect complex events and deviations from complex events, (2) is scalable and
manages to detect complex events in near real-time, i.e., it processes as fast as possible, and
(3) simplifies the work for the application programmer and provides personalisation. We split
this chapter into three sections, one per claim, and for each section we discuss the approach
and method we have chosen in order to evaluate and support the claim. We end this chapter by
summing up the conclusions and discussing whether or not our claims are sufficiently evaluated.
In order to support the first two claims, the query language must be able
to describe complex events and deviations correctly. This involves performing a systematic
evaluation of the query language constructs, e.g. ∧ and →, and verifying that they work as
specified, i.e., that they are correctly implemented. In order to do this it is important to be
able to choose the language constructs we want to evaluate. CommonSens takes as input an
environment, a complex query and a workload. The workload can either come from real sensors
or from emulated sensors. In order to evaluate only some language constructs at a time, we
have to customise the input. Customisation is easier with synthetic environments,
complex queries and workloads, i.e., input that is not necessarily related to a real automated
home care scenario.
It is only appropriate to evaluate CommonSens in real scenarios when we know that the
language constructs work correctly. Therefore, in addition to using synthetic workload, we also
use real workload in our evaluation. The real workload is either obtained from sensors in real-time or from trace files from related work. When the workload is obtained in real-time it means that
CommonSens should detect states and state transitions when they happen in the environment.
This takes more time than with synthetic workloads, since it includes the time it takes to pull
the sensors and to obtain the data tuples. However, real-time workload is not a requirement for
evaluation of the event processing.
CommonSens always performs the processing of the batches of data tuples in near real-time,
i.e., it processes them as fast as possible. This makes it possible to use synthetic workloads to
evaluate our second claim as well, i.e., that the complex events are detected in near real-time.
In addition, synthetic workloads and trace files make it possible to skip sequences where there
are no data tuples. This makes it possible to evaluate CommonSens faster. When the data tuples
are not obtained in real-time, this is called playback.
Table 6.1 shows the three combinations of real workload, synthetic workload, real-time and
playback that we use in our evaluation. The table also shows in which sections the combinations
are used. For each table cell we present the term that we use for the combination and the
sections in which this combination is used.

              Real workload             Synthetic workload
Real-time     Real-world: 6.1.2, 6.3    —
Playback      Trace file: 6.1.3         Simulation: 6.1.1, 6.2

Table 6.1: Workload types and the sections in which they are used.

Simulation is used in Sections 6.1.1 and 6.2. In this chapter,
simulation is the combination of playback and synthetic workload. Section 6.1.1 evaluates the
functionality of CommonSens, i.e., the detection of complex events and deviations. Section
6.2 evaluates the second claim, i.e., whether CommonSens manages to detect complex events and
deviations in real-time. Playback of workload from real sensors is called trace file evaluation.
This is done in Section 6.1.3, where we evaluate whether CommonSens manages to read trace files
from related work. Finally, the real-world evaluation is performed by using real sensors that
produce data tuples in near real-time. This workload is used in Sections 6.1.2 and 6.3.
There is one slot that we do not use in the table: real-time synthetic
workload. We conclude that real-time synthetic workloads do not add any more value to our
evaluation. This is because CommonSens already processes the data tuples as fast as it can, and
since real-time synthetic workloads do not include the time it takes to pull the sensors,
using this type of workload only takes an unnecessary amount of time. We can obtain equivalent
results through simulation.
6.1 Detecting Complex Events and Deviations
CommonSens is based on complex event processing concepts, and detection of complex events
and deviations are two important concepts that CommonSens should support. The method we
use to evaluate these two concepts consists of designing and running complex queries with
different workloads and comparing the results with the expected results.
The evaluation of our first claim consists of three separate parts. The first part systematically evaluates the constructs that the query language provides, for instance concurrency
and temporal properties. For the first part we use synthetic workloads and synthetic environments which are designed to investigate and isolate single language constructs. The details
are explained thoroughly in the corresponding section.
The second part of our evaluation uses real-time generated workloads to show that CommonSens manages to detect complex events using real sensors. The second part also evaluates
the LoI approximation. The LoI approximation depends on correct instantiation of the atomic
queries. This includes both atomic queries that contain a LoI and atomic queries that do not
contain a LoI. It is very important that the LoIs are approximated correctly, i.e., that the I SEC
and N O I SEC sets contain the correct sensors. If the approximation is not correct, CommonSens might report wrong results. Unlike the first part, we do not aim to isolate the language
constructs.
The third part uses real-world trace files from related work. It is important to show that
CommonSens can detect complex events and deviations from other sources than the ones that
we provide. In addition to acid-testing CommonSens and evaluating whether it can use different types of
sensors, reading trace files might be useful when comparing CommonSens with other automated
home care systems. To the best of our knowledge there does not yet exist any benchmark for
automated home care and CEP systems, e.g. similar to Linear Road for DSMSs [ACG+ 04].
In order to detect complex events and deviations, CommonSens depends on a set of language
constructs. We have identified five language constructs that have to be evaluated. These language
constructs are part of the query language.
I Timed and δ-timed atomic and complex queries. The temporal properties of atomic
and complex queries have to be correct. This also applies to queries that are not timed,
i.e., they should be successfully matched only once.
II P -registration. P -registration should work with both max and min. P -registration plays
an important role in the event processing, since it indirectly supports unreliable sensors.
One of our assumptions is that the sensors are reliable and produce correct state values.
However, with the P -registration one can state that only a given percentage of the state
values have to be correct.
III The logical operators ∧, ∨ and ¬. The logical operators are used between the atomic
queries. The current implementation of CommonSens does not support all types of logical
expressions. However, it is important to show that the ∧-lists work as expected.
IV The consecutiveness relation →. It should be possible to detect consecutive atomic and
complex events.
V Concurrency. Currently, we have only implemented the concurrency classes D URING
and E QUALS. However, these classes should work as defined.
6.1.1 Functionality Tests
Throughout the development process of CommonSens we have designed a set of regression
tests. These regression tests are used to verify that the language constructs
are implemented correctly. In addition, the regression tests have been used to verify that new
functionality has not affected already working code. In general, regression tests are designed
and implemented with backwards compatibility in mind [Dus02]. However, the overall goal in
this section is that the regression tests can show that CommonSens detects complex events and
deviations from complex queries. The regression tests have a very simple design that allows us
to define the input to CommonSens, while knowing the expected output. Since the regression
tests evaluate the functionality as well as backwards compatibility, we refer to regression tests
as functionality tests throughout this chapter.
Test Overview
The workloads should either (1) match the complex queries or (2) not match the complex
queries. If the workload matches the complex queries, it means that the conditions in a sufficient
number of the atomic queries are matched, i.e., the workload is a sequence of data tuples with
which the complex query finishes successfully. Since CommonSens supports deviation detection, we must also use workloads that do not match the complex query, to see if CommonSens
successfully manages to detect deviations. An additional workload, which corresponds to (2), is
a workload that does not start the event processing. These workloads correspond to the large
number of data tuples that the data tuple filter has to send to the Box objects but that do not qualify
for further evaluation.

Value  Meaning
0      The evaluation of the complex query has finished correctly.
3      A deviation is caused by temporal mismatch in one atomic query.
6      The evaluation of the complex query has not started.
7      A deviation is caused by mismatch in the concurrency.
8      A deviation is caused by temporal mismatch in a list of atomic queries.

Table 6.2: Return values from the functionality tests and their meaning.
In CommonSens, the functionality tests are fully automated. We have chosen this approach
since a large number of tests can be designed and run when they are needed. Automation also
removes the time it takes to prepare the tests, as well as human errors. We only need to instruct
CommonSens to start the functionality tests and wait for the results.
In addition to isolating the language constructs and evaluating them, it is important to evaluate complex queries with several language constructs. For instance, we need to evaluate complex queries that consist of atomic queries with different timing, which are related with ∧,
∨ and →. On the other hand, some language constructs, like timing and P -registration, always
have to be evaluated together, since P -registration depends on the current timing. However, it
is hard to combine all the language constructs and sizes of complex queries. It is not possible
to evaluate all possible combinations, so we have to make a selection.
In the following we give an overview of how the functionality tests are designed. The input to our functionality tests is a triple of parameters that consists of an environment instance,
a movement pattern and a complex query. In order to evaluate the language constructs, we
combine different environment instances, movement patterns and complex queries. By applying different combinations, we show that the language constructs work correctly with different
input.
When one functionality test has finished, it returns a value that reports the outcome of the
test. The values and their meanings are presented in Table 6.2. The table shows the values
0, 3, 6, 7 and 8. These values belong to a set of public constants that are used by the data
tuple filter. In total, the data tuple filter uses 12 values that inform about the current complex
query, e.g. telling the data tuple filter to start sending data tuples to the next Box object. A
drawback with the way we have used the return values of the functionality tests is that they cannot
specify which atomic query caused a deviation. The functionality tests simply return the
value 3 when a temporal mismatch is detected. However, the tests are designed in a way that
we know the atomic query that will return a deviation: If a deviation occurs, the test returns
a number that shows in which step of the simulation the deviation occurred. In addition, if
there is a mismatch between the result and the expected result, this has to be investigated more
thoroughly. All the functionality tests are described in a file, i.e., all the ‘environment instance’‘movement pattern’-‘query’-triples are written down together with the expected return value.
Figure 6.1: Environment instances used in functionality tests: a) e1.env, b) e2.env, c) e3.env.
The functionality tests are extensible, i.e., we can write additional information in these files if
this is required or if new language constructs are added to CommonSens. In the following we
discuss the parameters of the input triple.
Environment Instances
The environment instances that we use in the functionality tests are synthetic and designed to
simplify the evaluation of the language constructs. They are not inspired by any real environment and contain a set of sensor objects and a room object. In order to evaluate different
language constructs, we have created three environments with different numbers of sensors and
different sensor placements. The environments are shown in Figure 6.1. For example, concurrency can
be evaluated by having sensors with coverage areas that overlap, since both sensors are then
supposed to report data tuples at the same time. Sensors with adjacent
coverage areas can be used to investigate consecutiveness, i.e., that the sensors should report
data tuples that match a condition in a given order.
All the sensors in the environments have circular coverage areas that are not affected by
any of the other objects in the environment. This is done because we do not want to evaluate
the coverage area calculation in this section. The sensors simulate an RFID reader/tag scenario.
This makes it easy to identify the location of the monitored person by using the coverage areas
of the physical sensors. The sensors that are placed in the environments act as RFID readers while the monitored person wears an RFID tag. The RFID readers provide the capability
DetectPerson. Occasionally, the RFID tag is inside the coverage area of a sensor. When this
occurs, the RFID reader reports Person1 when it is pulled. Otherwise, the RFID reader does
not send a data tuple at all. It is irrelevant what type of sensor we use, which corresponds to our
concept of capabilities. Capabilities and sensors are only coupled during the query instantiation.
However, we assume that the sensor is reliable and returns correct results.
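The following sketch illustrates how such an emulated RFID reader with the DetectPerson capability can be modelled: it returns a data tuple only when the monitored person's coordinate lies inside its circular coverage area, and no data tuple otherwise. The class is an illustrative simplification, not the actual implementation.

// A small sketch of an emulated RFID reader with the DetectPerson capability:
// it returns a data tuple only when the monitored person's current coordinate
// lies inside its circular coverage area. The class is an illustrative simplification.
import java.util.Optional;

class EmulatedRfidReader {
    private final double x, y, radius;   // sensor position and coverage radius
    private final String sensorId;

    EmulatedRfidReader(String sensorId, double x, double y, double radius) {
        this.sensorId = sensorId;
        this.x = x;
        this.y = y;
        this.radius = radius;
    }

    // Called once per epoch with the monitored person's position.
    Optional<String> pull(double personX, double personY) {
        double dx = personX - x, dy = personY - y;
        if (dx * dx + dy * dy <= radius * radius) {
            return Optional.of("DetectPerson==Person1 from " + sensorId);
        }
        return Optional.empty();         // no data tuple when the tag is outside coverage
    }
}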
Environment instance e1.env consists of six sensors S1 to S6. These sensors are meant
to approximate LoIs that are used in the complex queries. The LoIs are discussed later in the
functionality test description. S1 to S4 are located in each of the corners of the environment.
S6 has a larger coverage than the other sensors. In addition it covers the coverage area of S5.
Environment instance e2.env consists of four sensors S1 to S4. All the four sensors have
coverage areas that cover each other. This means that when the monitored person moves into
the coverage area that is shown in the figure, all the sensors report Person1 if they are pulled.
Class  Filename  Movement pattern
a)     m5.mov    1 centre, 6 bottom left, 1 centre
       m6.mov    1 centre, 5 bottom left, 2 centre
       m8.mov    1 centre, 3 bottom left, 4 centre
       m7.mov    1 centre, 1 bottom left, 6 centre
       m11.mov   1 centre, 1 bottom left, 10 centre
b)     m2.mov    2 centre, 2 left, 2 centre
c)     m3.mov    1 centre, 1 top centre, 1 centre
d)     m4.mov    1 centre, 1 top left, 1 centre
       m12.mov   1 centre, 1 top left, 10 centre
e)     m13.mov   1 centre, 6 top right, 1 centre
       m14.mov   1 centre, 5 top right, 2 centre
       m15.mov   1 centre, 3 top right, 4 centre
       m16.mov   1 centre, 1 top right, 6 centre
       m9.mov    1 centre, 1 top right, 10 centre
f)     m1.mov    2 centre, 2 bottom right, 2 centre
       m10.mov   1 centre, 1 bottom right, 10 centre
g)     m13.mov   1 centre, 3 top left, 1 centre, 6 bottom right, 1 centre
h)     m17.mov   1 centre, 1 bottom left, 1 centre, 4 top right, 5 centre
i)     m18.mov   1 centre, 1 bottom left, 1 top left, 1 top right, 6 bottom right, 2 centre
       m19.mov   1 centre, 1 bottom left, 1 top left, 1 top right, 1 bottom right, 7 centre
       m20.mov   1 centre, 3 bottom left, 3 top left, 3 top right, 3 bottom right, 1 centre
       m21.mov   1 centre, 2 bottom left, 3 top left, 3 top right, 3 bottom right, 1 centre
       m22.mov   1 centre, 3 bottom left, 3 top left, 2 top right, 3 bottom right, 1 centre
j)     m23.mov   1 bottom right, 1 bottom centre, 5 centre, 1 top centre, 1 top left
k)     m24.mov   2 bottom left, 6 bottom centre, 2 bottom right
l)     m25.mov   3 top right, 17 bottom right, 3 bottom left, 7 bottom right, 10 top left
       m26.mov   7 top right, 17 bottom right, 3 bottom left, 7 bottom right, 10 top left
       m28.mov   3 top right, 6 bottom right, 6 bottom left, 7 bottom right, 10 top left
m)     m27.mov   3 top right, 7 top left, 2 bottom right, 30 top left
n)     m29.mov   3 top right, 17 bottom right, 3 bottom left, 21 bottom right
o)     m30.mov   10 bottom centre, 10 centre, 10 bottom centre
p)     m31.mov   10 bottom centre, 10 centre, 10 top right

Table 6.3: Mapping between movement pattern classes and movement patterns.
Figure 6.2: LoIs used in functionality tests.
In environment instance e3.env, S1 and S2 overlap in the upper right corner while S3 and S4
overlap in the lower left corner.
Locations of Interest
The LoIs and their locations in the environment are presented in Figure 6.2. In total, all the
queries use eight LoIs. LoI1 to LoI4, and LoI7 and LoI8 are defined to be in the corners of the
environment. LoI5 is defined to be in the middle of the environment, while LoI6 is larger than
the other LoIs and fully covers LoI5. LoI1 and LoI7, and LoI3 and LoI8 are defined to be in the
same area.
This type of spatial definition allows us to investigate ∧ and concurrency, since these language constructs require that two or more atomic queries are processed concurrently at some point in time.
For instance, we can state in two atomic queries that LoI1 and LoI7 should
be evaluated concurrently. If the monitored person is located in the approximation of these
two LoIs and the temporal specifications in the queries are matched, both atomic queries are
matched. On the other hand, one can state that LoI1 and LoI8 should be matched concurrently,
which we know is an impossible state since the two LoIs are located apart from each other in
Figure 6.2. However, it is important that CommonSens reports this correctly as well.
The placement of LoIs and sensors is not random. When a LoI is addressed in one or
more of the atomic queries, it is approximated by one or more sensors in the environment
instance. However, in this section we do not evaluate the LoI approximation, only the language
constructs. Evaluation of the LoI approximation is performed in Section 6.1.2.
Workloads
The workloads we use for our simulations are movement patterns of a virtual monitored person.
Since all the sensors provide the capability DetectPerson it is sufficient to use movement
patterns when evaluating complex event processing and deviation detection. This is because
they provide data tuples that indicate whether the monitored person is located inside the approximation of a LoI or not. This is implicitly supported when we use the synthetic workloads.
When we want to investigate P -registration, we have to let the monitored person stay inside the
coverage area for a sufficient amount of time. The movement patterns consist of discrete steps,
and we simply count simulation steps, which are equal to epochs, i.e., the time it takes to collect
all the data tuples and process them [MFHH05].

Figure 6.3: Movement pattern classes used in functionality tests.

Figure 6.4: Nine possible locations in the environments: top left, top centre, top right, left, centre, right, bottom left, bottom centre and bottom right.
In order to evaluate the query processing and deviation detection, we have created 16
movement pattern classes that aim to show movement between different coordinates in the
environment. These classes are shown in Figure 6.3. Each of these classes contains one or more
movement patterns, i.e., the monitored person moves between the same coordinates as in the
class but stays at the coordinates in different numbers of epochs. This means that timing and
P -registration can be evaluated. In addition, all the other language constructs can be evaluated
by similar movement patterns. For instance, a complex query that investigates the → relation
can be evaluated by sending the monitored person to different coordinates in the environment.
Each of these coordinates can be inside the approximation of the LoIs that are addressed in the
complex query.
In total, the current functionality tests use 31 different movement patterns. The possible coordinates correspond to one of nine locations in the environment. These locations are
intuitively named top left/centre/right, left, centre, right, and bottom left/centre/right, and are shown in Figure 6.4. The movement patterns are simple;
they do not contain the coordinates that are located on the line between any two coordinates.
This means that the virtual monitored person moves discretely between the coordinates.
Even though this is not natural movement, it is sufficient for our evaluation. If the monitored
person moves to a coordinate that is covered by one or more sensors, the monitored person can
be detected by these sensors.
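As an illustration, a compact movement pattern such as "1 centre, 6 bottom left, 1 centre" can be expanded into one location per epoch as sketched below; the parsing of the .mov files is assumed here and may differ from the actual file format.

// A sketch of how a movement pattern such as "1 centre, 6 bottom left, 1 centre"
// can be expanded into one location per epoch. The .mov file format is assumed.
import java.util.ArrayList;
import java.util.List;

class MovementPatternExpander {
    static List<String> expand(String pattern) {
        List<String> epochs = new ArrayList<>();
        for (String step : pattern.split(",")) {
            String[] parts = step.trim().split("\\s+", 2);
            int duration = Integer.parseInt(parts[0]);   // number of epochs at this location
            String location = parts[1];                  // e.g. "bottom left"
            for (int i = 0; i < duration; i++) {
                epochs.add(location);
            }
        }
        return epochs;
    }

    public static void main(String[] args) {
        // Expands to: centre, bottom left x6, centre
        System.out.println(expand("1 centre, 6 bottom left, 1 centre"));
    }
}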
Table 6.3 shows the mapping between the movement pattern classes and the movement
patterns. Some classes, e.g. e) and l), contain many movement patterns, while other classes,
e.g., m) and b), contain only one movement pattern. One may extend the list with additional
movement patterns.
Complex Queries
The final part of the functionality test triple is the complex query. We have created a set of
47 complex queries that are used to investigate the language constructs. The queries range from simple
queries that only address a capability with a value ([(DetectPerson==Person1)])
to more complicated queries that combine temporal properties and consecutiveness. All the
complex queries are shown in Tables A.2, A.3 and A.4. As with all the other parameters in the
input triple, the list of complex queries can be extended.
Test Examples
Currently, we have performed 182 functionality tests. All the functionality tests with their
expected values are shown in Table A.1. The functionality tests are divided into sets that relate
to each language construct, or combinations of language constructs. In order to show how the
tests are implemented, we show two subsets of the functionality tests which are described more
thoroughly than the remaining functionality tests. A discussion involving all the functionality
tests is unnecessary since Table A.1 contains information about each set. When the two subsets
are presented we discuss the results. We finally present a critical assessment of the functionality
tests and how they are designed.
The syntax of our query language allows the application programmer to explicitly state
interest in deviations or not. This is done by surrounding a list of atomic queries with dev().
However, to simplify the automation we have changed the semantics in the functionality tests.
We do not use dev() in any of the queries. Instead, the query evaluator is instructed to report
a deviation if the workload deviates from the complex query (values 3, 7 and 8 in Table 6.2) or
report that the query evaluation was successful (0) or did not start (6).
Timing is essential in CommonSens, and Language construct I is evaluated in every set.
Even atomic queries that do not have temporal specifications are included in
these tests, since they should be processed differently from atomic queries with temporal
specifications. In addition to investigating that the results from the functionality tests are correct,
we investigate the time consumption of the query processing. We do this in order to evaluate if
CommonSens manages to process data tuples in near real-time. The time consumption is
reported as the average processing time in milliseconds for each of the atomic queries. This is
done for each of the workloads. Since the functionality tests use synthetic sensors, we cannot
include the time it takes to gather data tuples from the sensors over a network. The event
processing time we measure is the time interval from the point in time when all the relevant data
tuples are collected until CommonSens has evaluated the data tuples. The time consumption
results, i.e., the average, minimum and maximum time consumption, are shown in Tables 6.4 and 6.5. The
issues related to time consumption are discussed more thoroughly in Section 6.2.
The first examples are functionality tests 178 to 182. These tests evaluate timing and
consecutiveness. We use complex query cq46.qry in Table A.4, the environment instance e1.env and the five workloads m25.mov to m29.mov. The complex query uses the followed-by relation → to describe consecutive atomic events.
[(DetectPerson==Person1, LoI1, 10, max 50%) ->
(DetectPerson==Person1, LoI2, 10, min 50%) ->
(DetectPerson==Person1, LoI3, 21, 30, max 50%) ->
(DetectPerson==Person1, LoI4, 31, 40, min 50%)]
The complex query addresses LoI1 to LoI4. To match the complex query, the monitored
person should move between the corners of the environment instance and stay in the corners
for a certain amount of time. This amount depends on the current atomic query. The first
atomic query is δ-timed and states that the sensors that cover LoI1 should report that Person1
is detected. The simulation steps, i.e., epochs, are denoted by time units from t1 to tx. For
instance, the atomic event that should match the first atomic query should occur for a maximum of 50%
of the ten time units. The second atomic query is similar to the first, but the event should occur
for a minimum of five time units in LoI2. The two last atomic queries are timed and should occur
for a maximum of 50% of the time units between t21 and t30, and for a minimum of 50% between t31 and t40.
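The P -registration check for a single window can be illustrated as follows: count the epochs in which the condition is matched and compare the resulting fraction against the min or max threshold. The sketch is not the actual CommonSens code; it uses the first atomic query of cq46.qry together with the beginning of m25.mov as a worked example.

// A worked sketch of the P-registration check for one window: count the epochs
// in [tb, te] whose data tuples match the condition, and compare the fraction
// against the min or max threshold. This is an illustration, not the actual
// CommonSens code.
class PRegistrationCheck {
    static boolean satisfied(boolean[] matchedPerEpoch, boolean isMin, double threshold) {
        int matches = 0;
        for (boolean m : matchedPerEpoch) {
            if (m) {
                matches++;
            }
        }
        double fraction = (double) matches / matchedPerEpoch.length;
        return isMin ? fraction >= threshold : fraction <= threshold;
    }

    public static void main(String[] args) {
        // First atomic query of cq46.qry: max 50% of a 10-epoch window.
        // Three matching epochs out of ten (as in m25.mov) gives 0.3 <= 0.5: satisfied.
        boolean[] window = {true, true, true, false, false, false, false, false, false, false};
        System.out.println(satisfied(window, false, 0.5));   // prints true
    }
}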
m25.mov matches the complex query. Table 6.3 shows that m25.mov first contains three
steps in the top right corner in the environment (see Figure 6.4). The P -registration of the first
atomic query is set to max 50% of 10 steps. Therefore, the three first steps of the workload are
sufficient for successful evaluation. These steps are followed by 17 steps in the bottom right
corner, three steps in the bottom left corner, seven steps in the bottom right corner and ten steps
in the top left corner. Translated to LoIs that are addressed in the complex query it means that the
monitored person moves to LoI3 from LoI1 through LoI2. Then, the monitored person moves
back to LoI2 before stopping in LoI4. This movement pattern matches the complex query, even
though the monitored person moves to LoI4 via LoI2. It is not stated in the complex query that
this is not allowed, as long as LoI4 is visited for a minimum of 50% of the time steps between t31
and t40. Note that CommonSens waits until the end of the time window before it identifies a
deviation. An alternative solution would have been to stop the evaluation of the atomic query as
soon as the maximum P -registration value has been exceeded.
The succeeding workloads give a deviation in one of the atomic queries. Movement pattern
m26.mov, which is used in test number 179, begins with seven steps in the top right corner.
This means that the maximum limit of five steps is already broken here. The remaining
workloads aim to make the query evaluator report a deviation in each of the three consecutive
atomic queries. The third workload (m27.mov) only reaches LoI2 for six steps before moving to LoI3
at t9, and gives a deviation at t19 when the minimum P -registration value in the second atomic
query is not matched. The fourth workload (m28.mov) gives a deviation at t25, since the
DetectPerson==Person1 event in LoI3 occurs for more than five time units. Finally, the
fifth workload (m29.mov) gives a deviation at t40 since the event does not occur when it should.
The second examples are functionality tests 172 and 173. These tests evaluate
timing and the concurrency class D URING. We use complex query cq47.qry in Table A.4,
the environment instance e1.env and the workloads m30.mov and m31.mov. The complex
query is as follows:
[during([(DetectPerson==Person1, LoI5)],
[(DetectPerson==Person1, LoI6)])]
Person1 should first be detected in LoI6, and during this event Person1 should also
be detected in LoI5. Finally, and in order to preserve the conditions of the D URING class,
Person1 should be detected only in LoI6. The workload m30.mov matches the complex
query correctly. The monitored person is first located in the bottom centre of the environment
for ten steps. This position starts the evaluation of the second atomic query, since this position
is inside LoI6. Afterwards, the monitored person moves to LoI5, which is inside LoI6, i.e., both
atomic queries are evaluated concurrently. Finally, the monitored person moves back to LoI6.
m31.mov gives a deviation, since the monitored person moves to the top right corner. This is
outside LoI6, which violates the definition of the D URING concurrency class.

Test number  Average  Minimum  Maximum  End time  Result
178          0.18     0.05     0.32     39        True (0 = 0)
179          0.37     0        0.8      4         True (3 = 3)
180          0.16     0        0.3      19        True (3 = 3)
181          0.22     0.08     0.38     25        True (3 = 3)
182          0.16     0.05     0.34     40        True (3 = 3)

Table 6.4: Results from functionality tests 178 to 182.

Test number  Average  Minimum  Maximum  End time  Result
172          0.74     0.53     1.1      40        True (7 = 7)
173          1.01     0.61     1.48     30        True (0 = 0)

Table 6.5: Results from functionality tests 172 and 173.
The results from the first five functionality tests are shown in Table 6.4. The results are as
expected, i.e., the first workload matches the complex query, whereas the remaining workloads
deviate from the complex query at different steps. Results from the functionality tests 172 and
173 are shown in Table 6.5. As expected, the first workload does not give any deviation, and
the evaluation stops as soon as LoI6 is reached for the second time. The second workload gives a
deviation as soon as LoI1 is reached instead of LoI6.
The average processing time of an atomic query in the first complex query is less than
0.37 milliseconds. The minimum processing time is 0, which means that it takes less than a
millisecond to process the atomic query. For the second complex query, which processes two
lists of atomic queries concurrently, the average processing time is higher. The maximum processing
time is 1.48 milliseconds. This indicates that CommonSens is fast, and that the query processing
is not a bottleneck for detecting deviations in near real-time.
All the results from the functionality tests are shown in Table A.5. All the 182 tests are
successfully evaluated, i.e., the expected value and the return value match. We have tried to
identify a representative set of complex queries with matching and deviating workloads; however, we cannot conclude that CommonSens detects complex events and deviations from all
types of workloads. We can only state that the functionality tests are successfully evaluated. On
the other hand, we combine environment instances, workloads and complex queries that address
only certain language constructs. We also use workloads that deviate from the complex queries
in order to show that deviation detection also works. However, with respect to P -registration,
only 25% and 50% are evaluated. In addition we also see that one P -registration test is missing.
We do not have a workload that investigates P -registration that matches the conditions like in
Figure 3.6. However, the experiments in Section 6.1.2 indicate that this is working correctly.
The timing and δ-timing are also static, i.e., δ-timing is mostly evaluated with atomic queries that
state that the event should last for five steps, e.g. cq26.qry. The timing is mostly from step
one to step six, e.g., cq29.qry. Only the final complex queries use other temporal specifications, e.g. cq46.qry. Timing of complex queries needs more evaluation. Functionality tests
174 to 176 only investigate a P -registration with max 100%. A thorough evaluation of these
issues is required in future work. Despite these remarks, the results from the functionality tests
indicate that CommonSens manages to detect complex events and deviations. In the following
section, we strengthen this indication by evaluating CommonSens in real-world scenarios.
6.1.2 Real-world Evaluation
In this section we evaluate the real-world detection of complex events in CommonSens. We
use real sensors that CommonSens pulls in real-time. In real-world evaluation it is important
that the complex queries are instantiated correctly, and that the FPP ROB values are as low as
possible. Therefore, in the real-world evaluation we focus on functional aspects of CommonSens related to spatial issues, including coverage area calculation, sensor placement, and LoI
approximation.
Reducing the probabilities of false positives is not trivial, especially not when applying
sensors that use radio signals, e.g. RFID tags and readers. Even though the functionality tests
in Section 6.1.1 emulate RFID tags and readers, applying these types of sensors in the real
world is not straightforward, and one must assume that there are many false positives and false
negatives [NBW07]. In addition, the sensors have to be placed so that the I SEC and N O I SEC
sets are unique for each LoI. Otherwise, CommonSens will give wrong results because it is not
clear which LoI has activity.
The real-world evaluation is done through a use-case, which consists of an environment that
is instantiated in CommonSens and in the real world. The environments are an office and an
office hallway. Both environments are equipped with cameras. We first evaluate spatial issues
and complex event processing in the office environment. Second, we investigate complex event
processing in the office hallway environment.
Office Environment Design, Method and Results
Although the office environments are not similar to homes, we can use these environments since
they can contain sensors and LoIs. This is a strength of CommonSens; it can be used in many
different application domains. Future work aims to evaluate CommonSens in homes as well.
The use-case consists of a series of experiments:
1. Demonstrate that radio signals propagating through walls can lead to false positives.
2. Investigate how our calculation of coverage area matches the real world coverage area of
a radio based sensor.
3. Show how we reduce the probability of false positives.
4. Increase the number of sensors to approximate the LoIs with a lower probability for false
positives.
5. Place an obstacle in the environment and show how this obstacle reduces the coverage
area of one of the sensors, which again leads to a higher probability of false positives.
The office environment consists of two rooms, Room A and Room B. The queries address two
LoIs: LoI1 and LoI2. The two LoIs are located in Room A. The room also has an area called
Area A, which is defined by dotted lines. The virtual CommonSens instance is illustrated in
Figure 6.5 a). The real world instance is shown in Figure 6.5 b).

Figure 6.5: The environment in CommonSens and in the real world.
We use two MICAz motes [xbo] M1 and M2 in the first experiment to emulate an RFID
reader (M2) and an RFID active tag (M1). RFID readers and tags are commonly used in automated home care for localisation of events [NBW07]. We place M1 in the middle of LoI2.
By applying the algorithm R EDUCE for calculating coverage area, we get an indication of how
the signals will pass through the wall. In order to show that this will result in false positives, we
investigate three scenarios: (1) we place M2 inside LoI1, (2) we move M2 to LoI2, and (3) we
move M2 to predefined coordinates on the other side of the wall. The metric we investigate is
the success rate, which is the ratio of packets received at M2 to packets transmitted from M1.
Since the signal passes through walls, the success rate is 1 during the entire experiment. This
means the system reports many false positives.
In the second experiment, we increase the distance between M1 and M2. The second metric
we use is the received signal strength indicator (RSSI) value. We use the RSSI value together
with the success rate from the first experiment. M1 is still placed in LoI2. We first put M2
next to M1. For each measurement we increase the distance between M1 and M2 by one
meter, until M2 is located next to the wall. It is important to know the permeability value of the
wall between the two rooms, since this value is used to predict how much the signal strength is
reduced when it passes through the wall. The permeability value of the wall is not available, so
it has to be estimated empirically. To estimate the permeability value of the wall, we perform an
additional measurement next to the wall in Room B. The thickness of the wall is approximately
12.5 cm, but we do not know the material. We set the permeability values for the rooms and
the wall by using the experimental values as input. We set m = −90dBm, which is the receive
sensitivity of MICAz, i.e., the lowest signal strength that the MICAz motes can receive.
Figure 6.6: Comparison of received and calculated signal strength (RSSI in dBm versus distance from the sensor in meters, for the experimental results and the model).
We use Signal Model 1, which means that we have to set the initial value P0. The initial
RSSI value is obtained by measuring while the two sensors are next to each other. Based on this
measurement we set P0 to 6.26 dBm.
Figure 6.6 shows the RSSI values with our algorithm for coverage area calculation using
Signal Model 1. The dashed line shows the experimental results while the other line shows
the expected results. In Room B the measured RSSI values slightly increase while the model
values decrease. We assume that this effect is due to multipath fading. This result indicates
that we can find a good match between measured values and model values in Room A, but the
simple signal propagation model provided by Signal Model 1 is not sufficient to correctly model
the signal strength in Room B. The conclusion from the two signal propagation experiments is
that for radio based signals there is need for more extensive studies. In addition, our results
indicate that using radio based sensors can lead to a considerable number of false positives, and
the application programmer has to be aware of these pitfalls.
We use the experience from the first two experiments to try sensors that are not based on
radio signals for sensing. In the third experiment we exchange the MICAz motes with one
web-camera. The web-camera is called Cam1_0 and is located at the bottom (Figure 6.5 a)).
The walls have a permeability value of 0 for light, i.e., a monitored person in Room B cannot
create false positives. We have measured the angle of the coverage area to be 45.8◦ , and place
the camera so that it points towards the corner where LoI1 and LoI2 are situated.
A web-camera only provides capabilities that are related to streams of images, e.g. a capability ImageStream. We are interested in more than just a stream of images. Hence, in order
to provide additional information, the stream has to be aggregated with other sensors.
In order to keep the implementation of the following experiments simple, we introduce
a capability called DetectMotion. DetectMotion returns a boolean value that tells if
motion is detected or not. We assume that there exist at least two different types of sensors
that provide this capability; physical sensors like motion detectors and logical sensors that use
cameras merged with motion detection functionality. We focus on the logical sensor.
The logical sensor depends on the capabilities MotionDescription and ImageStream. We use the computer vision library OpenCV [ope] to provide the capability MotionDescription. OpenCV is an open source C++ library that simplifies image processing by
abstracting away all the image processing algorithms. The user only needs to call a set of
predefined functions. Motion detection is supported by one of the sample programs that the
OpenCV source code provides. ImageStream is provided by the web-camera. All we need
to do is to extend this sample program so that it communicates with the logical sensor. In order
to let the logical sensor and OpenCV communicate, we use non-blocking sockets. OpenCV
obtains the image stream from the web-camera. If the OpenCV application detects motion, a
boolean value is set to true. When the motion stops, this boolean value is set to false.
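A minimal sketch of the logical sensor side of this communication is given below; it assumes a one-byte wire format (1 for motion, 0 for no motion) and a hypothetical host and port, neither of which is specified by the actual prototype.

// An illustrative sketch of the logical sensor side: it reads the motion flag
// that the OpenCV program writes to a socket and exposes it as the boolean
// DetectMotion capability. The port and the one-byte wire format are
// assumptions made for this example.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class MotionLogicalSensor {
    private final SocketChannel channel;
    private boolean lastMotion = false;

    MotionLogicalSensor(String host, int port) throws IOException {
        channel = SocketChannel.open(new InetSocketAddress(host, port));
        channel.configureBlocking(false);   // non-blocking, as in the prototype
    }

    // Returns the most recent DetectMotion value; the flag is updated whenever
    // the OpenCV side has written a new byte (1 = motion, 0 = no motion).
    boolean detectMotion() throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(1);
        while (channel.read(buf) > 0) {
            lastMotion = buf.get(0) == 1;
            buf.clear();
        }
        return lastMotion;
    }
}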
We apply a simple query that detects motion in the two LoIs.
[(DetectMotion == true, LoI1, 10, min 50%) ->
(DetectMotion == true, LoI2, 10, min 50%)]
The query states that there should first be motion in LoI1. This should be registered for a
minimum of 50% of the specified 10 seconds, i.e., 5 seconds. Afterwards, motion should be
detected in LoI2 for at least 5 seconds. Note that in the real-world evaluation we specify the
time in seconds and not in epochs.
The instantiation of the complex query gives the following I SEC and N O I SEC sets:
I SEC (LoI1) = {Cam1_0}
N O I SEC (LoI1) = ∅
I SEC (LoI2) = {Cam1_0}
N O I SEC (LoI2) = ∅
(6.1)
FPP ROB for both LoIs is 0.9. Since the two I SEC sets are identical, the system will report
false positives. We confirm this with three tests. First, we create motion only in LoI1, then only
in LoI2. Finally, we provoke false positives by creating movement in Area A. The results are
shown in Figure 6.7. The x-axis shows the duration of the experiment and the y-axis shows
the sensors. CommonSens samples the sensors once per second. The data tuples that contain
the value true are shown as black squares. The plots also contain arrows that show the start
time of the evaluation, when the condition is fulfilled, and when the time window of 10 seconds
is finished. As expected, the query finishes successfully in all three tests. This means the spatial
events are detected, but since we only use one sensor, CommonSens also reports false positives.

Figure 6.7: Real world experiments with only one camera covering LoI1 and LoI2 (motion in LoI1, motion in LoI2, and motion in Area A).
In the fourth experiment we use all three web-cameras. CommonSens reports that
FPP ROB (LoI1) is 0.61 and that FPP ROB (LoI2) is 0.80. The instantiation of the complex
query gives the following I SEC and N O I SEC sets:
I SEC (LoI1) = {Cam1_0, Cam2_0, Cam3_0}
N O I SEC (LoI1) = ∅
I SEC (LoI2) = {Cam1_0, Cam3_0}
N O I SEC (LoI2) = {Cam2_0}
(6.2)
For the first set of experiments we create motion in LoI1 until the first atomic query is
satisfied. We then create motion in LoI2 until the second part of the query is satisfied. In the
second set of experiments we move from LoI1 to Area A instead of LoI2.
The results of our experiments are shown in Figure 6.8. Both plots show that the two atomic
queries in the complex query are satisfied. The plots also show that the P -registration works
as specified. It is stated that 50% of the data tuples should match the condition, but it does not
necessarily have to be a consecutive list of matching data tuples. The plots from Figures 6.7
and 6.8 show this. We conclude that even with several sensors, it is important that the LoIs are
approximated carefully.

Figure 6.8: Real world experiments with three cameras covering LoI1 and LoI2 (motion from LoI1 to LoI2, and motion from LoI1 to Area A).
The fifth experiment is performed to show how obstacles in the environment stop the signals. We put a cardboard box in front of Cam3_0. When updated with this new information,
CommonSens changes the I SEC sets for both LoIs to Cam1_0 and Cam2_0. We perform movement patterns similar to those in the fourth experiment. The system does not manage to
differentiate between the two workloads, since both remaining web-cameras cover both LoIs. The result
is as expected.
With our first five experiments, we first show an obvious effect of radio based sensors, i.e.,
their signal passes through walls and can lead to false positives. Furthermore, our results show
that a simple signal propagation model is not sufficient to perfectly calculate coverage areas of
radio based sensors. However, for objects with permeability 0, coverage area calculation works
correctly. Finally, we show that CommonSens manages to handle sensor readings in close to
real-time, and automatically chooses sensors based on the current sensor setup and query.
The experiments have shown that if the FPP ROB values for the LoIs are too high, we can
simply not trust the results from the complex queries. This also applies to situations where the
I SEC and N O I SEC sets are not unique, i.e., two or more LoIs are approximated by equivalent
I SEC and N O I SEC sets. CommonSens relies on correct sensor placement. A future extension of
CommonSens is to include a test in the instantiation phase of the complex queries that verifies
unique LoI approximation. However, there is still a probability of false positives. Given the environment instances, CommonSens managed to detect the complex events correctly. Therefore,
we can also conclude that the complex events are detected correctly in these experiments.
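Such a uniqueness test could, for example, compare the I SEC and N O I SEC sets of all addressed LoIs pairwise, as in the following sketch; LoiApproximation is a hypothetical holder for the two sets and the method names are illustrative, not part of the actual implementation.

// A sketch of the instantiation-phase check suggested above: verify that no two
// LoIs addressed by a complex query are approximated by identical ISec and
// NoISec sets. LoiApproximation is a hypothetical holder for the two sets.
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class LoiUniquenessCheck {
    record LoiApproximation(Set<String> isec, Set<String> noIsec) {}

    // Returns the names of LoIs whose approximations collide with another LoI.
    static Set<String> findAmbiguousLois(Map<String, LoiApproximation> approximations) {
        Set<String> ambiguous = new HashSet<>();
        List<Map.Entry<String, LoiApproximation>> entries = List.copyOf(approximations.entrySet());
        for (int i = 0; i < entries.size(); i++) {
            for (int j = i + 1; j < entries.size(); j++) {
                if (entries.get(i).getValue().equals(entries.get(j).getValue())) {
                    ambiguous.add(entries.get(i).getKey());
                    ambiguous.add(entries.get(j).getKey());
                }
            }
        }
        return ambiguous;
    }
}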
Hallway Environment Design, Methods and Results
With the hallway experiment we show that CommonSens manages to handle a larger number
of sensors. We have equipped the office hallway with nine IP-cameras, named Cam1_0 to
Cam9_0, which provide the capability ImageStream. We use the same logical sensor that
we used in the previous section and use OpenCV as the provider of the capability MotionDescription. An overview of the hallway and the cameras is shown in Figure 6.9. The
overview of the hallway is taken directly from the environment creator in CommonSens. We
have included pictures that show the views from the cameras as well. For example, Cam1_0 is
located on the top of the entrance to the hallway, while Cam9_0 is directed towards a wall, covering the LoI CoffeeMachine. In addition to the LoI CoffeeMachine, the environment
consists of the LoIs HallwayInner, HallwayMain and HallwayTurn.

Figure 6.9: Overview of the hallway and location of cameras.
We use a simple complex query that aims to investigate movement between the four LoIs:
[(DetectMotion == true, HallwayInner, 2, min 50%) ->
(DetectMotion == true, CoffeeMachine, 2, min 50%) ->
(DetectMotion == true, HallwayMain, 2, min 50%) ->
(DetectMotion == true, HallwayTurn, 2, min 50%)]
For each atomic query it is sufficient to observe movement for one second. The instantiation
of the complex query gives the following I SEC and N O I SEC sets.
I SEC (HallwayInner) = {Cam2_0, Cam7_0, Cam8_0}
N O I SEC (HallwayInner) = {Cam9_0}
I SEC (CoffeeMachine) = {Cam2_0, Cam7_0, Cam8_0, Cam9_0}
N O I SEC (CoffeeMachine) = ∅
I SEC (HallwayMain) = {Cam2_0, Cam5_0, Cam6_0, Cam8_0}
N O I SEC (HallwayMain) = {Cam1_0, Cam3_0, Cam4_0}
I SEC (HallwayTurn) = {Cam1_0, Cam5_0, Cam6_0, Cam8_0}
N O I SEC (HallwayTurn) = ∅
(6.3)
The probabilities for false positives are as follows:
FPP ROB (HallwayInner) = 0.79
FPP ROB (CoffeeMachine) = 0.56
FPP ROB (HallwayMain) = 0.7
FPP ROB (HallwayTurn) = 0.55
(6.4)
We perform the experiment by creating motion in the four LoIs. With nine IP cameras, we
experience that the response is slow. Even though CommonSens is told to pull the cameras once
every second, it often takes more time to obtain a data tuple. Figure 6.10 shows the results from
the experiment. The black squares show when the sensors return true, i.e., we do not include
the data tuples that report false. On the other hand, the figure shows that there is a match
between the positive readings and the I SEC and N O I SEC sets. The atomic queries are matched
at t43, t128, t224 and t275. We have created the arrows in the figure manually, and the arrows
simply show when the processing of the next atomic query started. There is no movement
before t28 , hence this is when the plot starts. The experiment lasts for 280 seconds and returns
large trace files. We have included the trace files for this experiment in Section A.3.1 to show what the
data tuples look like when they are pulled from the sensors.

Figure 6.10: Results from the hallway experiment.
The hallway experiment indicates that CommonSens manages to handle larger scenarios
as well. However, we have experienced that a complex logical sensor like the one providing
DetectMotion based on an IP camera and OpenCV has a lower sampling frequency. The
trace files in Section A.3.1 confirm this. CommonSens has to adapt to this sampling frequency
in order to obtain more accurate results. An additional issue is related to the complex query.
It only required one matching set of data tuples in order to match the atomic queries. We are
aware that this P -registration and δ-timing is simple and that it does not sufficiently show that
the P -registration works well. However, despite this issue, we experience that CommonSens
manages to detect the complex event and that it manages to communicate with real sensors as
well.
6.1.3 Trace File Evaluation
We use trace files obtained from the work of Cook and Schmitter-Edgecombe [CSE09]. We
perform the trace file evaluation to show that CommonSens also manages to read trace files
from related work and detect complex events and deviations. If CommonSens can use these
trace files it is possible to compare CommonSens with other automated home care systems. In
addition, reading trace files is also a way to acid-test the system and to identify new issues that
we have yet not addressed in CommonSens.
Several sensors are placed inside a home, and subjects are told to follow given patterns. For
instance, one of the patterns relates to talking on the telephone. The monitored person should
look up a specified number in a phone book that is taken from the shelf, call the number, and
write down the cooking directions given on the recorded message. In addition, the monitored
person should put the phone book back on the shelf. These trace files are training data that are
created in order to use statistical methods, e.g. for detecting deviations. This means that Cook
and Schmitter-Edgecombe have another approach than CommonSens, but we can still use their
trace files.
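As an illustration, reading such an external trace file into data tuples could look like the sketch below. The line format assumed here (date, time, sensor identifier and value separated by whitespace) is only an example; the actual files from Cook and Schmitter-Edgecombe may differ and would require a matching parser.

// A minimal sketch of reading an external trace file into data tuples. The
// assumed line format (date, time, sensor identifier, value separated by
// whitespace) is an illustration only.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

class TraceFileReader {
    record TraceTuple(String date, String time, String sensorId, String value) {}

    static List<TraceTuple> read(Path traceFile) throws IOException {
        List<TraceTuple> tuples = new ArrayList<>();
        for (String line : Files.readAllLines(traceFile)) {
            String[] f = line.trim().split("\\s+");
            if (f.length >= 4) {
                tuples.add(new TraceTuple(f[0], f[1], f[2], f[3]));
            }
        }
        return tuples;
    }
}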
One important issue is that CommonSens uses the concept of LoIs. This concept is not
supported by related work. Hence, the data set from Cook and Schmitter-Edgecombe does not
include LoIs, and we could not include this concept in the queries that describe the order in the
phone event. In addition, it is not clear how the motion detectors in the environment work and if
they have a coverage area or if they are touch-based. Touch-based motion detectors are actually
switches that are turned on if the monitored person steps on them. Such sensors are usually
placed in a carpet or underneath the floor boards. We have interpreted that the motion detectors
have coverage areas. Therefore, we have created a LoI ByTheTelephone, which is covered
by a motion detector.
We have used one of the trace files as a template to define the duration of the correct pattern.
The expected duration of the complex event is set to be 119 seconds, but only 10% of the pulls
result in a matching data tuple. We use the during concurrency class to define the complex
event.
[during([(DetectPhoneBookPresent == ABSENT) ->
(PhoneUsage == START)->(PhoneUsage == END) ->
(DetectPhoneBookPresent == PRESENT)],
[(DetectMotion == ON, ByTheTelephone, 119, min 10%)])]
When we evaluate the template trace file, the query is successfully processed. In order to
detect a deviation, we have used another trace file that does not contain the same pattern; the
monitored person does not put the phone book back on the shelf. This should give a deviation.
After 119 seconds the window ends with a match of 7.6%, which results in a deviation.
Although CommonSens manages to read the trace files, we can only use one trace file to
create a complex query. This forces the monitored person to follow very strict patterns, and humans tend to be more dynamic than what our complex queries support. In the example above, it
would have been required to create one complex query for each of the trace files, and run all the
complex queries concurrently. However, the trace files from Cook and Schmitter-Edgecombe
are used to generate statistical evaluations of patterns, which is not currently supported by CommonSens. We conclude that CommonSens manages to read trace files. On the other hand, human behaviour is not static, and there is a need to extend CommonSens with functionality that
supports this. This includes supporting the complex queries with training data that includes the
variations in behaviour. This is further discussed in Section 7.3.
6.2 Scalability and Near Real-Time Event Processing
It is important that the system detects the events in near real-time, i.e., that the system detects
events when they happen. In this section we evaluate the processing time and scalability of
CommonSens with respect to near real-time detection of events.
We want to answer two questions through our experiments:
1. How does the number of sensors to be evaluated influence the processing time? In some
applications there might be need for a considerable number of sensors.
2. How does the complexity of queries affect the processing time?
Design and Method
To answer the first question, we need to increase the number of sensors that an instantiated
complex query uses. It is most convenient to reuse one of the complex queries from the functionality tests. Therefore, we use the complex
query cq46.qry. It provides consecutiveness, timing, δ-timing and P -registration. We use
environment e1.env and the workload m25.mov. We choose m25.mov since it matches the
complex query correctly. The parameter triple corresponds to the triple that functionality test 176 uses.

Figure 6.11: Processing time with 6, 66, 126, 186, and 246 sensors in the environment.
To ensure that the number of evaluated sensors increases at every evaluation step, we add, for each experiment, ten additional sensors per original sensor. Thus, in
the first experiment we start with the six sensors that provide the capability DetectPerson.
We perform additionally four experiments with 66, 126, 186, and 246 sensors in total. The new
sensors inherit the shapes and capabilities from the sensors that are already there.
The second question is answered by increasing the number of concurrent queries that are
processed. To answer this question we use the complex query cq47.qry together with the
environment e1.env and workload m30.mov. We increase the number of ∧ operators and
atomic queries. This is done by adding ∧ operators between copies of the atomic query (DetectPerson == Person1, LoI5). In practice this means that when we add ∧ operators and atomic queries,
we get the following complex query:
[during([(DetectPerson==Person1, LoI5) && ... &&
(DetectPerson==Person1, LoI5)] ,
[(DetectPerson==Person1, LoI6)])]
We increase from 0 to 50 ∧ operators. Even though it is not realistic that the monitored
person should be detected in the same LoI up to 51 times, we show that our system manages
to handle that amount of concurrent atomic queries. In this experiment we keep the number of sensors fixed and use the original installation with two sensors.
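The ∧-chained queries for this experiment can be generated mechanically. The following sketch is our own helper for illustration, not part of CommonSens; it simply builds the complex query string for a given number of ∧ operators.

/** Sketch: generate the ∧-chained complex query used in the scalability test. */
public class QueryGeneratorSketch {

    static String buildQuery(int andOperators) {
        StringBuilder sb = new StringBuilder("[during([");
        for (int i = 0; i <= andOperators; i++) {   // n operators give n + 1 atomic queries
            if (i > 0) {
                sb.append(" && ");
            }
            sb.append("(DetectPerson==Person1, LoI5)");
        }
        sb.append("] , [(DetectPerson==Person1, LoI6)])]");
        return sb.toString();
    }

    public static void main(String[] args) {
        // 0, 10, 20, ..., 50 ∧ operators, i.e. 1 to 51 atomic queries.
        for (int n = 0; n <= 50; n += 10) {
            System.out.println(buildQuery(n));
        }
    }
}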
We run each of the experiments 10 times to get an average processing time. The average
processing times for the first experiment are shown in Figure 6.11. The x-axis shows the time
steps of the simulation and the y-axis shows the processing time in milliseconds (ms).
Results
The results in Figure 6.11 show the processing time, and it is clear that the processing starts at once. This corresponds with the movement pattern, which spends the first three steps in LoI1.
At the fourth step the monitored person moves to LoI2 and stays there for 17 steps. However,
the processing of the second atomic query does not start until the tenth step in the experiment.
This corresponds with how the max operator is implemented. The query evaluator never stops
evaluating an atomic query until te has been reached. This is because the data tuple selector
might send new data tuples that match the condition in the atomic query and perhaps violate the
max limit. However, there are no additional matching data tuples, which means that there is not
much processing done. The evaluation of the second atomic query starts at t10 . The evaluation
continues until t19 . Note how the evaluation of the second atomic query continues even though
the min value is reached. This is because the data tuples match the condition. This is a result
of how the query evaluation is implemented. The batch of data tuples is matched against the
condition before the temporal conditions and P -registration are evaluated. This needs to be
fixed in future upgrades of the implementation. For timed queries the data tuple selector does
not start sending data tuples before the timestamp matches tb . This is seen at t20 , where there
is time-consuming query processing. At t19, the monitored person moves to LoI3 and stays
there for three steps. The max in the third atomic query is correctly matched at t22 , when the
monitored person moves back to LoI2. The monitored person stays in LoI2 for seven steps
before moving to LoI4. According to cq46.qry the fourth atomic query is timed and the
evaluation should start at t31 . This corresponds to the plot in Figure 6.11.
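The evaluation order described above can be summarised as follows. The sketch below uses hypothetical names and simplified types; it is not the CommonSens code, but it illustrates why matching data tuples are still iterated after min is reached: the batch is compared against the condition before the temporal conditions and the P-registration are checked.

/**
 * Sketch of the evaluation order of one atomic query per time step
 * (hypothetical names, not the CommonSens classes): the whole batch is
 * matched against the condition first; temporal conditions and
 * P-registration (min/max) are only checked afterwards.
 */
public class EvaluationOrderSketch {

    static int matchBatch(boolean[] batchMatches) {
        int matches = 0;
        for (boolean m : batchMatches) {   // every tuple is compared, even if min is reached
            if (m) {
                matches++;
            }
        }
        return matches;
    }

    static boolean evaluateStep(boolean[] batchMatches, int min, int max,
                                long now, long tb, long te) {
        int matches = matchBatch(batchMatches);                    // 1) condition matching
        boolean inWindow = now >= tb && now <= te;                 // 2) temporal conditions
        boolean pRegistration = matches >= min && matches <= max;  // 3) P-registration
        return inWindow && pRegistration;
    }

    public static void main(String[] args) {
        boolean[] batch = {true, true, false, true};
        System.out.println(evaluateStep(batch, 1, 3, 10, 5, 20));  // prints true
    }
}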
The maximum processing time for 246 sensors is 4 milliseconds. This happens at t39 when
the complex query is finishing. However, since the fourth atomic query only investigates a single LoI, at most 41 sensors are evaluated, which is the maximum number of sensors that cover one
single LoI. The data tuple selector sends a batch of 41 data tuples to the current Box object.
Since there is only one atomic query, the time consumption that is shown in the plot is the time
it takes to iterate through the batch and compare it to the condition. Since all the data tuples
match the condition in the atomic query, the whole batch of data tuples is iterated.
The results from the second experiment are shown in Figure 6.12. According to m30.mov,
the first 10 steps are covered by S6 only. The data tuple selector has to send the data tuples to
a ConcurrencyBox object. Note that there is a slight difference in processing time between the
five experiments. At this time, we cannot explain why this happens. The processing
time should have been similar for all the experiments since only one atomic query is evaluated
at that time. However, for the first experiment the average time consumption between t1 and t10
is 2.4 milliseconds, while for the fifth experiment the average time consumption is milliseconds.
At t10, the time consumption increases considerably; however, it seems like the time consumption increases linearly with the number of ∧ operators. Except for an outlier at t18, which is at 27 milliseconds, the average processing time in the fifth experiment is approximately 11 milliseconds, while for the first experiment it is 0.58 milliseconds.

Figure 6.12: Processing time with an increasing number of concurrent queries.

The linear increase can be
explained by how the Box object handles ∧-lists. As stated in Chapter 5, all the atomic queries
that are related by ∧ or ∨ operators are located in a Box object that the data tuple selector sends
data tuples to. What we see is the effect of the iteration through the ∧-list. This means that we
do not see the iteration through the batch of data tuples. This happens because the data tuple
selector only sends one data tuple to the box. In addition, we do not see any iteration through
the ∨ operators, since there is only one. This means that the experiments have not shown the
full complexity of the event processor, i.e., shown the worst case processing time.
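The linear growth can be illustrated with a minimal sketch of a Box that keeps its conjunctively related atomic queries in a list. The names below (AndListBoxSketch, AtomicCondition, matches) are our own and only indicate the structure, not the actual CommonSens classes.

import java.util.ArrayList;
import java.util.List;

/** Sketch: a Box evaluating an ∧-list of atomic conditions for one data tuple. */
public class AndListBoxSketch {

    interface AtomicCondition {
        boolean matches(String tupleValue);
    }

    private final List<AtomicCondition> andList = new ArrayList<>();

    void add(AtomicCondition c) {
        andList.add(c);
    }

    /** One pass over the whole ∧-list; the cost grows linearly with its length. */
    boolean evaluate(String tupleValue) {
        for (AtomicCondition c : andList) {
            if (!c.matches(tupleValue)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        AndListBoxSketch box = new AndListBoxSketch();
        for (int i = 0; i < 51; i++) {              // 51 atomic queries, as in the experiment
            box.add(v -> v.equals("Person1@LoI5"));
        }
        System.out.println(box.evaluate("Person1@LoI5"));   // prints true
    }
}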
All the functionality tests are also timed. The time consumption is evaluated by measuring
the average processing time of each atomic query. The plot is shown in Figure 6.13. The
average processing time is 0.2 milliseconds, and there is an outlier at functionality test 87 at
12 milliseconds. However, the average processing time for functionality test 87 is 0.64 milliseconds, which
does not differ much from the total average.
Based on the experiments we conclude that the processing time is sufficient for real-time
detection of complex events. The experiments show that even for the highest workload the
event processing part of CommonSens handles real-time processing of data tuples very well.
In our application domain it is not CommonSens that is the bottleneck. However, we have
not investigated the time consumption of pulling the sensors. We have not evaluated this since
CommonSens supports all types of sensors. If the sensors are slow, i.e., if they have a low
sampling frequency, this will affect CommonSens. However, note that the current implementation of CommonSens does not fully support varying sampling frequencies.
Figure 6.13: Average processing time for atomic queries in the functionality tests.
6.3 Personalisation and User Interface
Our final claim is that CommonSens simplifies the work for the application programmer and
provides personalisation. CommonSens does this, e.g., by reducing the amount of work related to sensor placement, query writing and detection of complex events and deviations. In this section we first investigate the personalisation claim: CommonSens queries can be easily used
in many different environments. We demonstrate the low effort required from the application
programmer to personalise the queries through a use-case study and an example taken from the
hallway experiment in Section 6.1.2. Second, we discuss the simplicity of the user interface.
Personalisation and Support for the Application Programmer
Personalisation is one of the strengths of CommonSens. To simplify the work of the application
programmer, CommonSens aims to reuse complex queries in several environments. This is
possible because the query language allows the application programmer to address abstract
concepts like capabilities and LoIs instead of specific sensors. This means that the application
programmer has to perform small changes in the queries, if any, to match a new home. In the
following, we study two scenarios to evaluate the personalisation and to show how this process
works in CommonSens.
First, we show that the personalisation can be performed by only changing a few parameters.
We focus on queries related to detecting the activities falling and taking medication (see Figure
3.5). These activities should be detected in two different instances with minimal rewriting of
the queries. The instances are excerpts from two environments taken from related work; the
WSU smart home project [CSE09] and MIT’s PlaceLab apartment [ILB+ 05]. The instances
are shown in Figure 6.14 and are realised through our proof-of-concept implementation. The
instances are equipped with sensors from our sensor model. The walls are all objects with
permeability value 0 for light and 0.01 for radio signals.
For fall detection, timing is not relevant because a fall can happen at any time of the day.
Hence, the temporal properties are not specified. The fall can also happen everywhere in the
home and CommonSens has to constantly pull the sensors that provide FallDetected. The
query that detects the fall is then very simple: (FallDetected == personID). The value
personID identifies the person, and must be updated for the particular instance. The personalisation process is done when CommonSens looks up the available sensor configurations that provide the capability FallDetected and checks whether the current instance provides these sensors. If the sensors are provided, CommonSens instantiates the query and starts
reading the data tuples from the relevant sensors. If not, CommonSens informs the application
programmer that the query cannot be instantiated and shows the list of sensors that need to be
in the environment. Note that, as described in Section 5.2.2, the current implementation of CommonSens requires that when the LoI is not specified, all the sensors that provide the capability have to send data tuples that match the condition. For the example above, it would have been more appropriate to require that only one of the sensors sends data tuples that match the condition. A consistent definition of this behaviour remains an open problem.
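The personalisation step for the fall detection query thus amounts to a lookup by capability followed by late binding. The following sketch is only illustrative; the types and helper names (Sensor, findSensorsByCapability) are assumptions and not the CommonSens API.

import java.util.List;

/** Sketch: late binding of a capability-based query to the sensors of one home. */
public class LateBindingSketch {

    record Sensor(String id, String capability) { }

    static List<Sensor> findSensorsByCapability(List<Sensor> installed, String capability) {
        return installed.stream().filter(s -> s.capability().equals(capability)).toList();
    }

    public static void main(String[] args) {
        List<Sensor> home = List.of(
                new Sensor("Accel1_0", "FallDetected"),
                new Sensor("Cam1_0", "DetectPerson"));

        // Query: (FallDetected == personID) -- only the capability is addressed.
        List<Sensor> bound = findSensorsByCapability(home, "FallDetected");
        if (bound.isEmpty()) {
            System.out.println("Query cannot be instantiated; a FallDetected sensor is missing.");
        } else {
            System.out.println("Query bound to: " + bound);   // start pulling these sensors
        }
    }
}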
In order to detect that the monitored person takes medications, it is sufficient to have sensors
that provide the capability TakingMedication, and which return the type of medication that
has been taken. If the monitored person should take several medications, it is sufficient to use
the ∧ operator between each of the medication types, as long as they are taken at the same time.
This is because of the way we have implemented ∧, i.e., it only accepts uniform matching. If
they should be taken in a given order, the → relation can be used. The D URING concurrency
class can be used to describe the temporal relation. The first part of the query identifies that the
medications are taken while the second part of the query identifies that the monitored person
is within the LoI related to the medication cupboard. This LoI is called MedCupboard and
is defined with different coordinates for the two environments. The query that the application
programmer has to write is based on this template:
during(((TakingMedication == Med1_0, timestamps) -> ... ->
  (TakingMedication == MedN_0, timestamps)),
  (DetectPerson == personID, MedCupboard, timestamps))
In order to show that two different types of sensors can provide the same capabilities, we
have placed two cameras in the kitchen (Figure 6.14 a)) and RFID tags in the bathroom (Figure
6.14 b)). We also show the LoI that the complex query addresses. The camera in the medicine
cupboard covers the LoI MedCupboard. The coverage area of the two cameras has an angle
of 90◦ , and the coverage area of the camera inside the cupboard is reduced by the panels. In the
bathroom, MedCupboard is covered by three active RFID tags named Med1_0, Med2_0 and
Figure 6.14: Two environments with different setup.
Med3_0. The three tags are attached to the medication and provide the capabilities TakingMedication and DetectPerson. Affected by the walls in the medication cupboard, the
coverage areas of the tags are shown as small irregular circles. This is automatically calculated
by CommonSens based on the signal type and the permeability values of the walls. The wrist-worn RFID reader of the monitored person returns the correct readings when it is within the
coverage areas of the tags.
By using the query above, the application programmer only needs to personalise the types of
medications, the timestamps and the coordinates of the LoIs. For instance, the monitored person
Alice should take her medication in a given order between 08:00h and 09:00h every day. This process should take at most six minutes, and taking each medication should take one minute. The last part of the query can be rewritten as (DetectPerson == Alice, MedCupboard, 08:00h, 09:00h, min 6%). The timestamps in each of the queries addressing TakingMedication are rewritten to, for instance, (TakingMedication == Med1_0, 1m). For
the monitored person Bob, each of the medications should be taken at different times during
the day. The application programmer needs to write one query for each of the medications. For
each of the queries the timestamps and coordinates of the LoIs are simply updated to match the
required temporal behaviour. Finally, CommonSens performs the late binding and defines the
I SEC and N O I SEC sets.
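Since only the medication types, the timestamps and the LoI coordinates change between homes, the personalisation can be seen as filling in a template. The sketch below is our own illustration of that substitution; the method names are hypothetical and the produced string only approximates the CommonSens query syntax.

/** Sketch: personalising the medication query template for one monitored person. */
public class MedicationTemplateSketch {

    static String personalise(String personId, String[] medications,
                              String loi, String from, String to, String minClause) {
        StringBuilder meds = new StringBuilder();
        for (int i = 0; i < medications.length; i++) {
            if (i > 0) {
                meds.append(" -> ");
            }
            meds.append("(TakingMedication == ").append(medications[i]).append(", 1m)");
        }
        return "during((" + meds + "), (DetectPerson == " + personId + ", "
                + loi + ", " + from + ", " + to + ", " + minClause + "))";
    }

    public static void main(String[] args) {
        // Alice: three medications in order, between 08:00h and 09:00h, min 6%.
        System.out.println(personalise("Alice",
                new String[] {"Med1_0", "Med2_0", "Med3_0"},
                "MedCupboard", "08:00h", "09:00h", "min 6%"));
    }
}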
Personalisation in Real-world Experiments
Finally, we show a simple personalisation example from the hallway experiment. We show how
we can define the coordinates for a LoI and place the LoI in different locations. CommonSens
adapts the query to the environments by changing the approximation of the LoI. We use a
complex query that addresses a LoI called Hallway. Figure 6.15 shows excerpts from the
hallway where the LoI is placed. Note that the coordinates are the same as the coordinates for the LoIs in Figure 6.9. The complex query investigates movement in Hallway. The condition only needs to be true in one data tuple in order for the query to finish correctly.

[(DetectMotion == true, Hallway, 2, min 50%)]

Figure 6.15: Excerpts from the hallway with the new LoI Hallway.
The I SEC and N O I SEC sets are equal to the ones for the original LoIs. We run the experiments by generating motion in the LoI Hallway. All queries are matched successfully. This
implies that CommonSens manages to adapt to different environments. In addition, the results
in Figure 6.16 show that the query processing works as well. Each plot shows the data tuples
until the query evaluation stops. Figure 6.16 a) shows the result from the LoI that was originally
HallwayMain. The query evaluation stops when Cam2_0, Cam5_0, Cam6_0 and Cam8_0
report true. This corresponds with the I SEC set of HallwayMain. Figure 6.16 b) corresponds to HallwayTurn. Figure 6.16 c) shows the activity in the N O I SEC set as well. The query evaluation stops only when Cam9_0 stops reporting true at t23. Finally, in d) the query is actually matched at t19, where all the sensors in the I SEC set report true. However, the evaluation does not stop until t20, since this is how min is implemented in CommonSens. The difference between Figure 6.16 d) and the others is that we were still in the area of the LoI at t20. This explains why none of the other three queries continues the evaluation after the query is matched: there was no motion in the I SEC and N O I SEC sets after the condition was matched.
Based on the use-case study and the hallway experiment, we conclude that CommonSens
manages to adapt to different environments.
Figure 6.16: Results from the four LoIs that are turned into Hallway: a) Hallway = HallwayMain, b) Hallway = HallwayTurn, c) Hallway = HallwayInner, and d) Hallway = CoffeMachine. Each plot shows the sensors Cam1_0 to Cam9_0 over time.
User Interface
This section explains the steps the application programmer takes to evaluate a complex query
in an environment in CommonSens. We end this section by referring to a small user study that
we have performed in our labs.
The user interface is GUI-based and is designed with simplicity and intuition in mind. Even though the current implementation only allows the application programmer to create the environment in 2D (see Figure 5.7), it demonstrates the possibilities of creating environments this way. Currently, the application programmer has to write the complex queries outside of CommonSens and load them by pushing the ‘Open Query’ button. Therefore, future work consists of supporting query writing within the CommonSens implementation and allowing auto-completion
of the queries, i.e., letting CommonSens suggest capabilities, values and LoIs based on the
availability in the repositories. When the environment is chosen, the sensors are placed, and the
complex query is chosen, the application programmer can choose whether to run simulations or monitoring. When running simulations, CommonSens pulls the virtual sensors as fast as possible using an internal clock rather than the real point of time. Monitoring is real-time and uses real workload. By using the GUI to create simple movement patterns, the application
programmer can investigate if the queries work as they should or if they need to be modified.
CommonSens sends messages about the query if the parser encountered any problems.
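The difference between simulation and monitoring is essentially which clock drives the pulls. The following sketch, with hypothetical names, illustrates the idea: simulations advance an internal step counter as fast as possible, while monitoring waits for the real point of time between pulls.

/** Sketch: simulation pulls virtual sensors as fast as possible; monitoring waits for real time. */
public class ClockSketch {

    interface Source {
        String pull(long step);
    }

    static void simulate(Source source, int steps) {
        for (long step = 0; step < steps; step++) {        // internal clock: no waiting
            System.out.println(source.pull(step));
        }
    }

    static void monitor(Source source, int steps, long periodMillis) throws InterruptedException {
        for (long step = 0; step < steps; step++) {
            System.out.println(source.pull(step));
            Thread.sleep(periodMillis);                    // real-time clock: wait for next pull
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Source virtualSensor = step -> "step " + step + ": DetectMotion == true";
        simulate(virtualSensor, 3);
        monitor(virtualSensor, 3, 1000);
    }
}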
We evaluate the simplicity of the user interface by running a small user study where five
users are requested to perform four simple tasks:
1. Open an environment file, a query and a movement pattern. Simulate the movement
pattern in the environment.
2. Move around the objects in the environment and toggle the coverage of the sensors. Toggling will show the real coverage area.
3. Change the size and rotation of some of the objects in the environment.
4. Save the environment after working with it.
The response is positive and the test users report that the GUI is simple and intuitive. The
first task is performed without any problems. On the other hand, the users report some limitations in how the objects can be moved, and that it is hard to see which object is chosen. In addition, the
animation of the movement pattern is not shown. These issues are easy to improve.
6.4 Discussion and Conclusion
We have evaluated the three claims that we made in Chapter 1.
Claim 1: CommonSens Detects Complex Events and Deviations
We divide the evaluation of Claim 1 in three parts: The first part is a functionality test of five language constructs. The query language is used to describe events, and if each of these language
constructs work as designed, CommonSens manages to detect complex events and deviations.
We combine different environment instances and synthetic workloads with complex queries that
address combinations of language constructs. Timing is very important in CommonSens and the
related language construct is indirectly evaluated in every complex query. The workloads either
match the complex queries, deviate from the complex queries or do not start the evaluation of
the complex query at all. We compare the results from 182 functionality tests to the expected results and show that all tests completed successfully.
Since we evaluate Claim 1 by running measurements on a set of functionality tests, we
cannot conclude more than what the results show, but by combining the language constructs we evaluate a representative selection of complex events. An analytical approach [Jai91] would probably have shown that CommonSens manages to detect all types of complex events and deviations which the implementation supports.
The second part of the evaluation of Claim 1 is to evaluate if CommonSens manages to
detect complex events from real sensors. This includes an evaluation of the LoI approximation,
which shows that radio based sensors can return unexpected results since they send signals
through walls. It is also important that the approximation for each LoI is unique, i.e., that two
or more LoIs are not approximated by two equivalent sets of sensors. Despite these experiences,
CommonSens manages to detect the complex events correctly.
In the third part we show that CommonSens manages to read trace files from related work
and detect complex events and deviations from these sets. This is a feature that is important
when we want to acid-test CommonSens and compare it with other automated home care systems.
Claim 2: CommonSens Processes Data Tuples in Near Real-Time
We evaluate the near real-time support of the query processor by running two separate tests. First, we
increase the number of sensors that approximate the LoIs and show that CommonSens manages
to handle input from several sensors while it still detects complex events in near real-time.
Second, we increase the number of concurrent queries. The results show that the increasing
number of concurrent queries does not affect the processing time significantly.
Claim 3: CommonSens simplifies the work for the application programmer and provides
personalisation
Simplicity is hard to measure, but we support our claim by performing a user test in our labs.
We evaluate the personalisation by first showing a use-case involving complex events like fall
detection and medication taking. Second, we indicate through a real-world experiment that
CommonSens manages to automatically adapt to different environment instances. Finally, the
user test confirms that the user interface is simple and intuitive. We discuss the user interface and
compare it to the requirements from the application programmer and show that the requirements
are met.
Based on the evaluation of CommonSens, our conclusion is that our three claims are sufficiently evaluated and supported.
Chapter 7
Conclusion
In this chapter we conclude this thesis and summarise our contributions. In addition, we present
a critical review of our claims. Finally, we point out open problems and directions for future work.
7.1 Summary of Contributions
Based on our overall goal of simplifying the work for the application programmer, we have
modelled, designed, implemented and evaluated CommonSens, a multimodal complex event
processing system for automated home care. Automated home care is an emerging application
domain, and there exist many approaches that aim to solve the issues related to this domain.
For instance, there exist proprietary solutions that use sensors to detect ADLs in the home, and
there exist simpler solutions where single sensors report incidents that they are programmed to
detect. An example of the latter is an accelerometer that detects whether the monitored person
has fallen. If the monitored person falls, the sensor reacts because the accelerometer values
reach a predefined threshold. Through the chapter concerning background material and related
work we have presented the most relevant work in the field. In addition, we have presented the
technologies that CommonSens relies on; sensor technology and complex event processing. To
the best of our knowledge, there exist no systems that provide solutions for the issues that we
have addressed in this thesis.
A worst case scenario for the application programmer is to manually write queries for all
the monitored persons, who might have homes that are equipped with different types of sensors.
One very important aspect to consider is the fact that there are many similarities between the instances. In many instances, for example, falls are important to detect; hence, in the worst case scenario, the same type of query has to be written several times. In order to avoid
addressing all types of sensors that detect falls, the query language in CommonSens allows
the application programmer to address the capability of the sensors instead. CommonSens
automatically investigates a virtual instance of the home and binds the query to the sensors
that provide the capabilities. Hence, in this thesis we have shown that it is possible to provide
an open and extensible automated home care system which still simplifies the work for the
application programmer.
Throughout our work we have made important contributions to the automated home care
application domain. The first contributions in our work are models for events, environments
and sensors. In Chapter 3 we introduce the three models. The event model distinguishes between
atomic and complex events and introduces the concept of LoIs, i.e., the spatial properties of
events. In contrast to related work, we have an explicitly defined event model; events are only
those states and state transitions that someone has declared interest in. The event model is
simple. Still, it covers the aspects that we are interested in, i.e., determining if states or state
transitions match conditions in queries. We divide the sensor model into three types of sensors.
The physical sensor obtains state values from the environment, the external source provides
stored data, and the logical sensor aggregates data tuples from a set of other sensors. For the
application programmer, the sensors are addressed through their capabilities. This means that
when the application programmer writes queries, the sensors are never addressed directly. This
allows for abstract queries that can apply to many different homes.
The properties of the environment model define the physical shapes of the objects that make
up a home. In addition, the properties contain information about how the objects affect signals.
Our second contribution is to let the application programmer use the environment properties in
a proactive way when placing the sensors in the home. In contrast to related work, we operate
with sensor coverage areas that are reduced by the environment. This makes sensor placement
more realistic. First, we show how to model that the permeability values of the objects affect
the signal strength. Second, we show how CommonSens uses the reduced coverage areas to
calculate the probability for false positives. Third, the instantiation of the queries is the process where the capabilities that are used in the queries are bound to the sensors in the home.
Fourth, we show how CommonSens does this by populating the two sets I SEC and N O I SEC.
Note that, in order for the sensor placement to be correct, it is required that the environment
is sufficiently instantiated and that the signals are modelled with realistic propagation models.
This is especially relevant for sensors using radio signals, e.g. RFID reader/tag pairs.
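As a simple illustration of the reduced coverage areas, the following sketch follows a single ray outwards and stops where the signal strength falls below a detection threshold. It is one plausible formulation with invented constants, not the Ding-based ray reduction that CommonSens actually uses (see Appendix A.2).

/**
 * Sketch (one plausible formulation, not the ray reduction from Appendix A.2):
 * the signal strength of a sensor decays with distance, each object on the ray
 * scales it by its permeability, and the coverage of the ray ends where the
 * strength drops below the detection threshold.
 */
public class CoverageReductionSketch {

    /** Returns the distance (in grid cells) the ray reaches before it is too weak. */
    static int reducedRange(double p0, double threshold, double pathLossExponent,
                            double[] permeabilityPerCell) {
        for (int d = 1; d <= permeabilityPerCell.length; d++) {
            double strength = p0 / Math.pow(d, pathLossExponent);
            for (int i = 0; i < d; i++) {
                strength *= permeabilityPerCell[i];        // walls attenuate the ray
            }
            if (strength < threshold) {
                return d - 1;
            }
        }
        return permeabilityPerCell.length;
    }

    public static void main(String[] args) {
        // Free air (permeability 1.0) for three cells, then a wall with 0.01 for radio.
        double[] ray = {1.0, 1.0, 1.0, 0.01, 1.0, 1.0};
        System.out.println(reducedRange(100.0, 1.0, 2.0, ray));   // prints 3: the wall stops the ray
    }
}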
Through the query language, the application programmer can write queries that address
complex events. Through our third contribution, i.e., deviation detection, the query language
allows the application programmer to write queries that instruct CommonSens to detect deviations. When the ADLs deviate, CommonSens should report that something is wrong. This is a
simpler approach than using queries to describe everything that can go wrong in the home. The
application programmer simply has to query the expected ADLs. When the monitored person
does not follow these rules, CommonSens interprets this as a deviation.
CommonSens operates in a life cycle that consists of using the models for sensor placement,
query instantiation and query processing and evaluation. When there are needs for change, e.g.
that the monitored person needs more monitoring, CommonSens enters a system shut down
phase before the life cycle continues. This cyclic approach allows CommonSens to be extended
when this is required.
7.2 Critical Review of Claims
We have developed a set of new concepts and models. This set is the foundation for the design
and implementation of CommonSens. The implementation is used in a set of experiments in
order to evaluate whether we achieve our claims. Through the evaluation we use simulations
based on synthetic workload and trace files. We also use real-world experiments with real
sensors and real-time CEP. Our claims are as follows:
Claim 1: CommonSens detects complex events and deviations
In order to support this claim, we have used CEP and combined concepts from this technology with our models. This is especially related to the introduction of concepts like our query
language, coverage areas, LoIs and approximation through I SEC and N O I SEC sets. All these
concepts are domain specific and related to detection of events and deviations in the home. First,
in order to evaluate that the query language can be used to support our claim, we systematically
design tests for each query language construct. The tests use workloads that both match and do
not match the queries. With this approach, we verify that the detection of the complex queries
is correct and that deviations are correctly detected as well. Second, we evaluate the claim by
performing experiments with real sensors. These experiments show that the complex events are
detected. However, we have not investigated the deviation detection in these experiments. In all
the experiments, coverage areas and LoI approximation are evaluated by using more than one
sensor to approximate the LoIs and to detect the events and the deviations. We show that, given
an environment with sensors, CommonSens manages to instantiate the I SEC and N O I SEC sets
correctly.
Claim 2: CommonSens processes data tuples in near real-time
In order to support this claim, we have designed CommonSens so that the number of data
tuples that have to be evaluated is minimised. This is done through a pull-based model, i.e.,
CommonSens only pulls those sensors that are relevant for the query. The relevant sensors are
identified by the data tuple selector by investigating the I SEC and N O I SEC sets. Through a set of
two separate experiments we show that CommonSens is scalable with respect to an increasing
number of sensors and queries. We show that CommonSens manages to process all queries in
near real-time. In addition, this claim is also evaluated and supported during the evaluation of
Claim 1. These experiments show that all the data tuples are processed in near real-time, as
well.
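The effect of the pull-based selection can be pictured with a small sketch (hypothetical names; in CommonSens the selection is made by the data tuple selector from the I SEC and N O I SEC sets): only sensors that belong to one of the two sets are pulled, so irrelevant sensors never produce data tuples that have to be evaluated.

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

/** Sketch: pull only the sensors that are relevant for the instantiated query. */
public class PullSelectionSketch {

    public static void main(String[] args) {
        List<String> allSensors = List.of("Cam1_0", "Cam2_0", "Cam5_0", "Cam9_0", "RFID3_0");
        Set<String> isec = Set.of("Cam2_0", "Cam5_0");
        Set<String> noIsec = Set.of("Cam9_0");

        List<String> toPull = allSensors.stream()
                .filter(s -> isec.contains(s) || noIsec.contains(s))
                .collect(Collectors.toList());

        System.out.println("Pulling only: " + toPull);   // [Cam2_0, Cam5_0, Cam9_0]
    }
}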
Claim 3: CommonSens simplifies the work for the application programmer and provides
personalisation
In order to support this claim we have introduced abstractions that help simplifying the work
for the application programmer. First, we provide a query language that lets the application
programmer address LoIs and capabilities. Through LoIs, the application programmer only has
to describe general spatial properties, and through the capabilities, the application programmer
does not have to address the sensors directly. This simplifies the personalisation, i.e., adapting
a query plan to the current home, sensors and monitored person. CommonSens investigates
the environment and available sensors, and instantiates the queries based on late binding. The
late binding is done automatically by finding the proper sensors based on their capabilities and
placement. The sensor placement can be done interactively, i.e., the application programmer
gets feedback from CommonSens regarding the probability for false positives. In addition, all
the sensors and objects can be obtained and reused from a repository. Besides showing that
CommonSens supports these concepts, we have performed a small user study where we asked
a number of persons to use CommonSens to perform a simple task and evaluate the experience.
The response from the user study was positive; however, there is a need for more extensive user
studies to fully support our claim.
7.3 Open Problems and Future Work
Despite all our contributions and the fact that we have solved our problem statement, there are
still some interesting issues that have to be investigated further. In this section we present some
of these issues. First, we address the open problems, e.g. concepts that are yet not supported
in CommonSens. Second, we address new issues, i.e., interesting new directions based on the
achievements we have made in CommonSens.
7.3.1 Open Problems
Due to time limitation, we have focused on implementing the core functionality to evaluate our
claims. In order to provide a fully fledged product, there is still some functionality that remains to
be implemented. For instance, only one complex query can run at the same time. Note that this
does not involve concurrency; the concurrency classes D URING and E QUALS are implemented,
and form a template for the implementation of the remaining concurrency classes.
As noted in Chapter 5, the support for expressions using the ∧ and ∨ operators is limited.
An open problem is to exchange the current ∧-list structure with a structure that is better for
supporting complex first-order logical expressions. During the evaluation of deviations, CommonSens was instructed to report both successful event processing and deviations. This is
sufficient for evaluation of the event processing and the deviation detection. However, what remains to be implemented is the dev operator, so that the application programmer can explicitly
specify that deviations are of interest.
Through our experiments we have observed that there remains some work related to signal
propagation. We do not support multipath fading, which includes reflection of signals. This is
a very important issue, since most signals are reflected by objects. Some signals, like radio signals, both reflect from and pass through an object. Signal propagation is complex, and correct
models have to be implemented in order for sensors to be placed more appropriately. For the
sensor placement, CommonSens does not yet automatically approximate the LoIs. The sensor
placement has to be done by the application programmer through the CommonSens GUI. Automatic sensor placement requires taking cost factors such as the available sensors and their price into account, since a perfect approximation of a LoI will use a considerable number of sensors, which is not practical in a real scenario.
Finally, we need to define capabilities more precisely, i.e., we need to find a data model
that can describe the properties of capabilities. Currently we use a simple text-based approach. However, there is a need for more sophisticated data structures to address the capabilities more adequately. A possible approach is to investigate service oriented architecture (SOA) and see
how loosely coupled interfaces are defined to provide different kinds of services.
7.3.2 Future Work
Through our work, we have found two issues that need to be further investigated and which are
based on the current state of CommonSens.
First, we assume that all the readings from the sensors are correct. In the real-world, this
is not a realistic assumption. Sensors are unreliable, and in order to obtain correct readings,
there are several operations that have to be done. For instance, it is important to identify when
the quality of information is sufficient. This means that even though the information is not
complete, the missing readings from the sensors can be constructed by performing calculations
on the readings that we already have. In CommonSens we use several sensors to approximate
LoIs, but sensor fusion, i.e., using the information from sensors that cover the same areas in
the environment, can also increase the quality of information, which indicates how good the
information from the sensors is compared to the ground truth. The ground truth is defined by
the states and state transitions in the real-world.
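As a simple illustration of the kind of fusion we have in mind, and as an assumption rather than an implemented CommonSens feature, a missing reading from one sensor could be reconstructed from the sensors that cover the same area:

import java.util.OptionalDouble;
import java.util.stream.DoubleStream;

/** Sketch: reconstruct a missing reading by averaging overlapping sensors. */
public class FusionSketch {

    static OptionalDouble fuse(double[] overlappingReadings) {
        return DoubleStream.of(overlappingReadings).average();
    }

    public static void main(String[] args) {
        // One sensor failed to deliver; two other sensors cover the same area.
        double[] available = {21.4, 21.8};
        System.out.println("Reconstructed reading: " + fuse(available).orElse(Double.NaN));
    }
}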
Second, it would be interesting to use CommonSens as an actuator. We have not discussed
how CommonSens should behave when events and deviations are detected, and currently the
only action performed by CommonSens is to send simple notifications. This can be extended,
and by using different types of actions, we can let CommonSens start other processes in the
home. For instance, we can use the concept of capabilities and logical sensors to define complex actuators. An example of a complex actuator is how a notification can be adapted to the
environment. The application programmer only has to state that he wants the notification to
be a wake up call. CommonSens investigates the environment and finds an alarm clock which
is located in the bedroom, and automatically binds the notification to this alarm. The alarm is
started at a point of time that is defined in a query. Actuators can also be used as part of robot
technology, i.e., a robot uses sensors and performs actions based on a set of queries. Therefore,
since it is designed and modelled for automated home care, CommonSens can be extended and
used as a component in future home care robots as well.
Bibliography
[ACG+ 04]
Arvind Arasu, Mitch Cherniack, Eduardo Galvez, David Maier, Anurag S.
Maskey, Esther Ryvkina, Michael Stonebraker, and Richard Tibbetts. Linear
road: a stream data management benchmark. In Proceedings of the Thirtieth
international conference on Very large data bases - Volume 30, VLDB ’04, pages
480–491. VLDB Endowment, 2004.
[Agg05]
Charu C. Aggarwal. On abnormality detection in spuriously populated data
streams. In SDM: SIAM International Conference on Data Mining, 2005.
[AH00]
Ron Avnur and Joseph M. Hellerstein. Eddies: continuously adaptive query processing. SIGMOD Rec., 29:261–272, May 2000.
[AKJ06]
Pradeep Kumar Atrey, Mohan S. Kankanhalli, and Ramesh Jain. Information
assimilation framework for event detection in multimedia surveillance systems.
Multimedia Systems, 12(3):239–253, 2006.
[All83]
James F. Allen. Maintaining knowledge about temporal intervals. Commun. ACM,
26(11):832–843, 1983.
[ASSC02]
I.F. Akyildiz, Weilian Su, Y. Sankarasubramaniam, and E. Cayirci. A survey on
sensor networks. Communications Magazine, IEEE, 40(8):102 – 114, August
2002.
[Atr09]
Pradeep K. Atrey. A hierarchical model for representation of events in multimedia
observation systems. In EiMM ’09: Proceedings of the 1st ACM international
workshop on Events in multimedia, pages 57–64, New York, NY, USA, 2009.
ACM.
[BF07]
Azzedine Boukerche and Xin Fei. A coverage-preserving scheme for wireless
sensor network with irregular sensing range. Ad Hoc Netw., 5(8):1303–1316,
2007.
[Bor79]
Richard Bornat. Understanding and writing compilers: a do-it-yourself guide.
Macmillan Publishing Co., Inc., London and Basingstoke, 1979.
[BP00]
P. Bahl and V.N. Padmanabhan. Radar: an in-building rf-based user location and
tracking system. In INFOCOM 2000. Nineteenth Annual Joint Conference of the
IEEE Computer and Communications Societies. Proceedings. IEEE, volume 2,
pages 775–784 vol.2, 2000.
[BR07]
Mike Botts and Alexandre Robin. OpenGIS Sensor Model Language (SensorML)
Implementation Specification. Open Geospatial Consortium Inc., 2007.
[Car07]
Jan Carlson. Event Pattern Detection for Embedded Systems. PhD thesis, Department of Computer Science and Electronics, Mälardalen University, 2007.
[CBK09]
Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A
survey. ACM Comput. Surv., 41(3):1–58, 2009.
[CCC+ 05]
Yi-Chao Chen, Ji-Rung Chiang, Hao-hua Chu, Polly Huang, and Arvin Wen Tsui.
Sensor-assisted wi-fi indoor location system for adapting to environmental dynamics. In MSWiM ’05: Proceedings of the 8th ACM international symposium
on Modeling, analysis and simulation of wireless and mobile systems, pages 118–
125, New York, NY, USA, 2005. ACM.
[CCD+ 03]
Sirish Chandrasekaran, Owen Cooper, Amol Deshpande, Michael J. Franklin,
Joseph M. Hellerstein, Wei Hong, Sailesh Krishnamurthy, Samuel R. Madden,
Fred Reiss, and Mehul A. Shah. Telegraphcq: continuous dataflow processing. In
SIGMOD ’03: Proceedings of the 2003 ACM SIGMOD international conference
on Management of data, pages 668–668, New York, NY, USA, 2003. ACM.
[CcR+ 03]
Don Carney, Uğur Çetintemel, Alex Rasin, Stan Zdonik, Mitch Cherniack, and
Mike Stonebraker. Operator scheduling in a data stream manager. In Proceedings
of the 29th international conference on Very large data bases - Volume 29, VLDB
’2003, pages 838–849. VLDB Endowment, 2003.
[con]
Contiki - the operating system for connecting the next billion devices - the internet
of things. http://www.sics.se/contiki/.
[CSE09]
D. J. Cook and M. Schmitter-Edgecombe. Assessing the quality of activities in a
smart environment. Methods of Information in Medicine, 2009.
[CTX09]
Yuanyuan Cao, Linmi Tao, and Guangyou Xu. An event-driven context model
in elderly health monitoring. Ubiquitous, Autonomic and Trusted Computing,
Symposia and Workshops on, 0:120–124, 2009.
[DGH+ 06]
Alan Demers, Johannes Gehrke, Mingsheng Hong, Mirek Riedewald, and Walker
White. Towards expressive publish/subscribe systems. In Advances in Database
Technology - EDBT 2006, volume 3896/2006, pages 627–644. Springer Berlin /
Heidelberg, 2006.
[DGP+ 07]
Alan J. Demers, Johannes Gehrke, Biswanath Panda, Mirek Riedewald, Varun
Sharma, and Walker M. White. Cayuga: A general purpose event monitoring
system. In CIDR, pages 412–422. www.crdrdb.org, 2007.
[DLT+ 07]
Min Ding, Fang Liu, Andrew Thaeler, Dechang Chen, and Xiuzhen Cheng. Fault-tolerant target localization in sensor networks. EURASIP J. Wirel. Commun.
Netw., 2007(1):19–19, 2007.
[Dus02]
Elfriede Dustin. Effective Software Testing: 50 Ways to Improve Your Software
Testing. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA,
2002.
[EN10]
Opher Etzion and Peter Niblett. Event Processing in Action. Manning Publications Co., August 2010.
[esp]
Esper - complex event processing homepage. http://esper.codehaus.
org/.
[evi]
Evidence - embedding technology. http://www.evidence.eu.com/.
[GADI08]
D. Gyllstrom, J. Agrawal, Yanlei Diao, and N. Immerman. On supporting kleene
closure over event streams. Data Engineering, 2008. ICDE 2008. IEEE 24th
International Conference on, pages 1391–1393, April 2008.
[GIM+ 10]
Daniel Giusto, Antonio Iera, Giacomo Morabito, Luigi Atzori, Luca Bencini,
Giovanni Collodi, Davide Palma, Antonio Manes, and Gianfranco Manes. A
real implementation and deployment for wine production management based on
wireless sensor network technology. In The Internet of Things, pages 339–348.
Springer New York, 2010. 10.1007/978-1-4419-1674-7_33.
[GMUW08] Hector Garcia-Molina, Jeffrey D. Ullman, and Jennifer Widom. Database Systems: The Complete Book. Prentice Hall Press, Upper Saddle River, NJ, USA,
2008.
[gpc]
General polygon clipper library. http://www.cs.man.ac.uk/~toby/
alan/software/.
[HJ08]
P. Hafliger and E. Johannessen. Analog to interval encoder with active use of gate
leakage for an implanted blood-sugar sensor. In Biomedical Circuits and Systems
Conference, 2008. BioCAS 2008. IEEE, pages 169 –172, 2008.
[HT03]
Chi-Fu Huang and Yu-Chee Tseng. The coverage problem in a wireless sensor
network. In WSNA ’03: Proceedings of the 2nd ACM international conference on
Wireless sensor networks and applications, pages 115–121, New York, NY, USA,
2003. ACM.
[iee]
Ieee 802.15 working group for wpan. http://www.ieee802.org/15/.
[ILB+ 05]
Stephen S. Intille, Kent Larson, J. S. Beaudin, J. Nawyn, E. Munguia Tapia, and
P. Kaushik. A living laboratory for the design and evaluation of ubiquitous computing technologies. In CHI ’05: CHI ’05 extended abstracts on Human factors
in computing systems, pages 1941–1944, New York, NY, USA, 2005. ACM.
[Jai91]
Rai Jain. The Art of Computer Systems Performance Analysis. John Wiley &
Sons, Inc., 1991.
[KH09]
Kevin Kinsella and Wan He. An aging world: 2008. international population
reports. Issued June 2009. U.S. Department of Health and Human Services, 2009.
[KTD+ 03]
Kimberle Koile, Konrad Tollmar, David Demirdjian, Howard Shrobe, and Trevor
Darrell. Activity zones for context-aware computing. In In UbiComp, pages 90–
106. Springer-Verlag, 2003.
[LAT05]
Toril Laberg, Haakon Aspelund, and Hilde Thygesen. SMART HOME TECHNOLOGY: Planning and management in municipal services. Norwegian Directorate for Social and Health Affairs, the Delta Centre, 2005.
[LC07]
Dik Lun Lee and Qiuxia Chen. A model-based wifi localization method. In InfoScale ’07: Proceedings of the 2nd international conference on Scalable information systems, pages 1–7, ICST, Brussels, Belgium, Belgium, 2007. ICST (Institute
for Computer Sciences, Social-Informatics and Telecommunications Engineering).
[LF98]
David C. Luckham and Brian Frasca. Complex event processing in distributed
systems. Technical Report CSL-TR-98-754, Stanford University Technical Report, 1998.
[LKA+ 95]
David C. Luckham, John J. Kenney, Larry M. Augustin, James Vera, Walter
Mann, Walter Mann, Doug Bryan, and Walter Mann. Specification and analysis
of system architecture using rapide. IEEE Transactions on Software Engineering,
21(4):336–355, 1995.
[Luc01]
David C. Luckham. The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems. Addison-Wesley Longman Publishing
Co., Inc., Boston, MA, USA, 2001.
[mav]
Mavhome - managing an adaptive versatile home.
edu/mavhome/.
http://ailab.wsu.
[MFHH05] Samuel R. Madden, Michael J. Franklin, Joseph M. Hellerstein, and Wei Hong.
Tinydb: an acquisitional query processing system for sensor networks. ACM
Trans. Database Syst., 30(1):122–173, 2005.
[MJAC08]
Jie Mao, John Jannotti, Mert Akdere, and Ugur Cetintemel. Event-based constraints for sensornet programming. In DEBS ’08: Proceedings of the second international conference on Distributed event-based systems, pages 103–113, New
York, NY, USA, 2008. ACM.
[ML08]
Marilyn Rose McGee-Lennon. Requirements engineering for home care technology. In CHI ’08: Proceeding of the twenty-sixth annual SIGCHI conference on
Human factors in computing systems, pages 1439–1442, New York, NY, USA,
2008. ACM.
[MWKK04] Christopher A. Miller, Peggy Wu, Kathleen Krichbaum, and Liana Kiff. Automated elder home care: long term adaptive aiding and support we can live with. In
AAAI Spring Symposium on Interaction between Humans and Autonomous Systems over Extended Operation, 2004.
[NBW07]
Usman Naeem, John Bigham, and Jinfu Wang. Recognising activities of daily
life using hierarchical plans. In Smart Sensing and Context. Springer, 2007.
[ope]
Opencv homepage. http://opencv.willowgarage.com.
[O’R87]
Joseph O’Rourke. Art gallery theorems and algorithms. Oxford University Press,
Inc., New York, NY, USA, 1987.
[PBV+ 09]
Kyungseo Park, Eric Becker, Jyothi K. Vinjumur, Zhengyi Le, and Fillia Makedon. Human behavioral detection and data cleaning in assisted living environment
using wireless sensor networks. In Proceedings of the 2nd International Conference on Pervasive Technologies Related to Assistive Environments, PETRA ’09,
pages 7:1–7:8, New York, NY, USA, 2009. ACM.
[PP07]
Animesh Patcha and Jung-Min Park. An overview of anomaly detection techniques: Existing solutions and latest technological trends. Comput. Netw.,
51:3448–3470, August 2007.
[PS06]
Kostas Patroumpas and Timos Sellis. Window specification over data streams.
Advances in Database Technology - EDBT 2006, 4254:445–464, 2006.
[QZWL07] Ying Qiao, Kang Zhong, HongAn Wang, and Xiang Li. Developing event-condition-action rules in real-time active database. In SAC ’07: Proceedings of
the 2007 ACM symposium on Applied computing, pages 511–516, New York, NY,
USA, 2007. ACM.
[Ree79]
Trygve Reenskaug. Thing-model-view-editor, an example from a planning system. Technical report, Xerox PARC, May 1979.
[RGJ09]
Setareh Rafatirad, Amarnath Gupta, and Ramesh Jain. Event composition operators: Eco. In EiMM ’09: Proceedings of the 1st ACM international workshop on
Events in multimedia, pages 65–72, New York, NY, USA, 2009. ACM.
[RH09]
Sean Reilly and Mads Haahr. Extending the event-based programming model to
support sensor-driven ubiquitous computing applications. Pervasive Computing
and Communications, IEEE International Conference on, 0:1–6, 2009.
[SBR09]
Holger Storf, Martin Becker, and Martin Riedl. Rule-based activity recognition
framework: Challenges, technique and learning. In Pervasive Computing Technologies for Healthcare, 2009. PervasiveHealth 2009. 3rd International Conference on, pages 1 –7, 1-3 2009.
[SFSS09]
Ansgar Scherp, Thomas Franz, Carsten Saathoff, and Steffen Staab. F–a model
of events based on the foundational ontology dolce+dns ultralight. In K-CAP ’09:
Proceedings of the fifth international conference on Knowledge capture, pages
137–144, New York, NY, USA, 2009. ACM.
[SGP08]
Jarle Søberg, Vera Goebel, and Thomas Plagemann. To happen or not to happen: towards an open distributed complex event processing system. In MDS ’08:
Proceedings of the 5th Middleware doctoral symposium, pages 25–30, New York,
NY, USA, 2008. ACM.
[SGP10a]
Jarle Søberg, Vera Goebel, and Thomas Plagemann. Commonsens: Personalisation of complex event processing in automated homecare. In The Sixth International Conference on Intelligent Sensors, Sensor Networks and Information
Processing (ISSNIP), pages 275–280, December 2010.
[SGP10b]
Jarle Søberg, Vera Goebel, and Thomas Plagemann. Detection of spatial events in
commonsens. In Proceedings of the 2nd ACM international workshop on Events
in multimedia, EiMM ’10, pages 53–58, New York, NY, USA, 2010. ACM.
[SGP11]
Jarle Søberg, Vera Goebel, and Thomas Plagemann. Deviation detection in automated home care using commonsens. In Workshop on Smart Environments to
Enhance Health Care (SmartE), pages 668–673, March 2011.
[SH09]
I. Skog and P. Handel. In-car positioning and navigation technologies - a survey.
Intelligent Transportation Systems, IEEE Transactions on, 10(1):4 –21, 2009.
[SK09]
M. Saini and M. Kankanhalli. Context-based multimedia sensor selection method.
In Advanced Video and Signal Based Surveillance, 2009. AVSS ’09. Sixth IEEE
International Conference on, pages 262 –267, 2009.
[SKJ09]
Mukesh Saini, Mohan Kankanhalli, and Ramesh Jain. A flexible surveillance system architecture. In Proceedings of the 2009 Sixth IEEE International Conference
on Advanced Video and Signal Based Surveillance, AVSS ’09, pages 571–576,
Washington, DC, USA, 2009. IEEE Computer Society.
[SR92]
S.Y. Seidel and T.S. Rappaport. 914 mhz path loss prediction models for indoor
wireless communications in multifloored buildings. Antennas and Propagation,
IEEE Transactions on, 40(2):207–217, Feb 1992.
[SSGP07]
Katrine Stemland Skjelsvik, Jarle Søberg, Vera Goebel, and Thomas Plagemann.
Using continuous queries for event filtering and routing in sparse manets. In FTDCS ’07: Proceedings of the 11th IEEE International Workshop on Future Trends
of Distributed Computing Systems, pages 138–148, Washington, DC, USA, 2007.
IEEE Computer Society.
[SSS10]
Sinan Sen, Nenad Stojanovic, and Ljiljana Stojanovic. An approach for iterative
event pattern recommendation. In DEBS ’10: Proceedings of the Fourth ACM
International Conference on Distributed Event-Based Systems, pages 196–205,
New York, NY, USA, 2010. ACM.
[TBGNC09] Tarik Taleb, Dario Bottazzi, Mohsen Guizani, and Hammadi Nait-Charif. Angelah: A framework for assisting elders at home. IEEE Journal on Selected Areas
in Communications, 27(4):480–494, May 2009.
[tin]
Tinyos homepage. http://www.tinyos.net/.
[TPS+ 05]
Gilman Tolle, Joseph Polastre, Robert Szewczyk, David Culler, Neil Turner,
Kevin Tu, Stephen Burgess, Todd Dawson, Phil Buonadonna, David Gay, and
Wei Hong. A macroscope in the redwoods. In Proceedings of the 3rd international conference on Embedded networked sensor systems, SenSys ’05, pages
51–63, New York, NY, USA, 2005. ACM.
[WDR06]
Eugene Wu, Yanlei Diao, and Shariq Rizvi. High-performance complex event
processing over streams. In SIGMOD. ACM, 2006.
[WF06]
Phillip Ian Wilson and John Fernandez. Facial feature detection using haar classifiers. J. Comput. Small Coll., 21:127–133, April 2006.
[WJ07]
U. Westermann and R. Jain. Toward a common event model for multimedia applications. Multimedia, IEEE, 14(1):19–29, Jan.-March 2007.
[WRGD07] Walker White, Mirek Riedewald, Johannes Gehrke, and Alan Demers. What is
"Next" in Event Processing? In PODS ’07: Proceedings of the twenty-sixth ACM
SIGMOD-SIGACT-SIGART symposium on Principles of database systems. ACM,
2007.
[WSM07]
Ståle Walderhaug, Erlend Stav, and Marius Mikalsen. The mpower tool chain enabling rapid development of standards-based and interoperable homecare applications. In Norwegian Informatics Conference, 2007.
[WT08]
Feng Wang and Kenneth J. Turner. Towards personalised home care systems.
In PETRA ’08: Proceedings of the 1st international conference on PErvasive
Technologies Related to Assistive Environments, pages 1–7, New York, NY, USA,
2008. ACM.
[xbo]
Crossbox technology inc. homepage. http://www.xbow.com.
[XZH+ 05]
Zhe Xiang, Hangjin Zhang, Jian Huang, Song Song, and Kevin C. Almeroth.
A hidden environment model for constructing indoor radio maps. A World of
Wireless, Mobile and Multimedia Networks, International Symposium on, 1:395–
400, 2005.
[YG03]
Y. Yao and J. E. Gehrke. Query processing for sensor networks. In Proceedings of
the 2003 Conference on Innovative Data Systems Research (CIDR 2003), January
2003.
[YYC06]
Li-Hsing Yen, Chang Wu Yu, and Yang-Min Cheng. Expected k-coverage in
wireless sensor networks. Ad Hoc Networks, 4(5):636 – 650, 2006.
[zig]
Zigbee alliance. http://www.zigbee.org/.
Appendix A
Appendix
A.1 calculateError
/**
 * Calculates the area of all the intersections of the sensors that cover
 * a LoI in the AtomicQuery tmpAQuery. If the LoI == null, all the sensors
 * in the environment that provide the capability have to be included.
 */
public void calculateError(AtomicQuery tmpAQuery) {
    LocationOfInterest tmpLoI = tmpAQuery.getLoi();
    Capability capability = tmpAQuery.getCapability();
    ArrayList<Sensor> isec = new ArrayList<Sensor>();
    ArrayList<Sensor> noIsec = new ArrayList<Sensor>();
    Poly intersection = null;
    // Obtain all the tuple sources that provide the capability.
    ArrayList<Sensor> providesCapability = findSensor(capability);
    if (providesCapability.isEmpty()) {
        return;
    }
    // Add the sensors that have an intersection with LoI.
    if (tmpLoI != null) {
        for (Sensor tmpSource : providesCapability) {
            if (tmpSource instanceof PhysicalSensor) {
                PhysicalSensor tmpSens = (PhysicalSensor) tmpSource;
                intersection = tmpLoI.getShape().getPoly().intersection(
                        tmpSens.getPolyReduced(this, tmpSens.getSignalType()));
                if (intersection.getArea() == tmpLoI.getShape().getPoly().getArea()) {
                    isec.add(tmpSens);
                }
            }
        }
        // Create the actual intersection.
        if (!isec.isEmpty()) {
            intersection = ((PhysicalSensor) isec.get(0)).getPolyReduced(
                    this, ((PhysicalSensor) isec.get(0)).getSignalType());
            for (int i = 1; i < isec.size(); i++) {
                intersection = ((PhysicalSensor) isec.get(i)).getPolyReduced(
                        this, ((PhysicalSensor) isec.get(i)).getSignalType())
                        .intersection(intersection);
            }
            // Add the sensors that have an intersection with the
            // "intersection" but not the LoI (noIsec).
            for (Sensor tmpSource : providesCapability) {
                if (tmpSource instanceof PhysicalSensor) {
                    PhysicalSensor tmpSens = (PhysicalSensor) tmpSource;
                    if (tmpSens.getIsCapabilityProvided(capability.getName())) {
                        Poly tmpIntersection = tmpSens.getPolyReduced(this,
                                tmpSens.getSignalType()).intersection(intersection);
                        Poly testIntersection = tmpSens.getPolyReduced(this,
                                tmpSens.getSignalType()).intersection(
                                tmpLoI.getShape().getPoly());
                        if (tmpIntersection.getArea() != 0
                                && testIntersection.getArea() == 0) {
                            // The LoI is not covered, but the intersection is.
                            noIsec.add(tmpSens);
                        }
                    }
                }
            }
            // Run an XOR on the intersection and the elements in noIsec.
            for (Sensor tmpTS : noIsec) {
                PhysicalSensor tmpSens = (PhysicalSensor) tmpTS;
                Poly tmpIntersection = intersection.intersection(
                        tmpSens.getPolyReduced(this, tmpSens.getSignalType()));
                intersection = intersection.xor(tmpIntersection);
            }
            tmpAQuery.setLoIApprox(intersection);
            tmpAQuery.setFPProb(tmpLoI.getShape().getPoly().getArea()
                    / intersection.getArea());
            tmpAQuery.setIsec(isec);
            tmpAQuery.setNoIsec(noIsec);
        } else /* isec was empty. */ {
            tmpAQuery.setLoIApprox(null);
            tmpAQuery.setFPProb(0);
        }
    } else /* tmpLoI == null */ {
        tmpAQuery.setLoIApprox(null);
        tmpAQuery.setFPProb(0);
        tmpAQuery.setIsec(providesCapability);
    }
}
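The false-positive probability that calculateError assigns to the atomic query is the ratio between the area of the LoI and the area of the coverage approximation computed above. The following stand-alone sketch illustrates that ratio for a fully covered LoI; it is not CommonSens code, and java.awt.geom.Rectangle2D merely stands in for the Poly class used in the listing.

import java.awt.geom.Rectangle2D;

public class FPProbSketch {
    public static void main(String[] args) {
        // Hypothetical 2 x 2 m LoI and a 4 x 4 m coverage approximation
        // that fully contains it.
        Rectangle2D loi = new Rectangle2D.Double(0, 0, 2, 2);
        Rectangle2D approx = new Rectangle2D.Double(-1, -1, 4, 4);
        // The LoI is fully covered when the intersection equals the LoI itself.
        boolean covered = loi.createIntersection(approx).equals(loi);
        // False-positive probability as in calculateError:
        // area of the LoI divided by the area of the approximation.
        double fpProb = (loi.getWidth() * loi.getHeight())
                / (approx.getWidth() * approx.getHeight());
        System.out.println("covered = " + covered + ", fpProb = " + fpProb); // 0.25
    }
}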
A.2 reduceRay
/**
 * Reduces the ray due to objects the ray meets. Currently it uses the
 * Ding algorithm.
 *
 * @param tmpEnv
 * @param signalType
 * @return The new boundary triple
 */
public Triple reduceRay(Environment tmpEnv, SignalType signalType) {
    int d0 = CommonSens.ANTENNA_LENGTH;
    int range = numElements;
    double p0 = CommonSens.INITIAL_SIGNAL_STRENGTH;
    double m = CommonSens.THRESHOLD;
    double betaAir = Math.log(p0 / m)
            / Math.log(((double) range) / (double) d0);
    double strength = p0;
    int i = 0;
    int r = 0;
    double prevPerm = betaAir;
    double currPerm = betaAir;
    while (strength > m && r < range) {
        currPerm = perm(tmpEnv, triples.get(r).getTriple(), signalType);
        if (prevPerm != currPerm) {
            p0 = strength;
            i = d0;
            prevPerm = currPerm;
        }
        strength = strength(tmpEnv, signalType, d0, p0, r, i);
        r += 1;
        i += 1;
    }
    if (r == range)
        return triples.get(range - 1).getTriple();
    return triples.get(r).getTriple();
}
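The constants and the exponent betaAir above are consistent with a simple power-law attenuation model, p(d) = p0 / (d/d0)^beta, in which the signal strength drops to the threshold m exactly at d = range in free air, and a denser object along the ray raises the exponent so that the ray is cut earlier. The sketch below only illustrates this assumed model; the actual strength() helper used by reduceRay may differ.

public class AttenuationSketch {
    // Assumed power-law attenuation; not the CommonSens strength() helper.
    static double attenuate(double p0, double beta, double d0, double d) {
        return p0 / Math.pow(d / d0, beta);
    }

    public static void main(String[] args) {
        double p0 = 100.0, m = 1.0, d0 = 1.0;
        int range = 10;
        double betaAir = Math.log(p0 / m) / Math.log(range / d0);
        // In free air the ray reaches the threshold exactly at d = range.
        System.out.println(attenuate(p0, betaAir, d0, range)); // 1.0 (= m)
        // A denser object (larger exponent) pushes the strength below m earlier.
        System.out.println(attenuate(p0, 2 * betaAir, d0, 4)); // ~0.39 (< m)
    }
}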
A.3 Functionality Tests Configuration
Evaluates a complex query with no LoIs and no temporal properties. Since no LoIs are addressed, all the sensors in the environment instance that provide the capability have to report a match. This should only happen in e2.env, since all the sensors cover the same area in the environment. As soon as all the sensors report a match, the complex query evaluation stops. Language construct I is evaluated.
Test number  Environment  Movement pattern  Complex query  Expected result
1  e1.env  m1.mov  cq1.qry  6
2  e2.env  m2.mov  cq1.qry  0
3  e2.env  m3.mov  cq1.qry  6
Evaluates Language construct I, but the complex query addresses LoI3. This means that only the sensors that approximate LoI3 are included in the instantiated complex query. Since the temporal properties are not defined, the complex query is matched as soon as the sensors in the approximation report correct data tuples.
Test number  Environment  Movement pattern  Complex query  Expected result
4  e1.env  m1.mov  cq2.qry  0
5  e1.env  m2.mov  cq2.qry  6
6  e1.env  m4.mov  cq2.qry  6
Evaluates Language construct I by letting tb = 5 epochs. This means that the complex query is δ-timed and all the data tuples have to match the condition once the evaluation has started. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
7  e1.env  m5.mov  cq3.qry  0
8  e1.env  m6.mov  cq3.qry  0
9  e1.env  m7.mov  cq3.qry  3
Evaluates Language construct I by letting tb = 1 and te = 6. This means that the complex query is timed between epochs 1 and 6, and all the data tuples have to match the condition between these two timestamps. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
10  e1.env  m5.mov  cq4.qry  0
11  e1.env  m6.mov  cq4.qry  0
12  e1.env  m7.mov  cq4.qry  3
13  e1.env  m3.mov  cq4.qry  3
Evaluates Language constructs I and II by letting tb = 5 epochs. The P-registration is set to min 25%. LoI3 is addressed. A minimal sketch of how such a P-registration threshold can be checked is given after the following table.
Test number  Environment  Movement pattern  Complex query  Expected result
14  e1.env  m5.mov  cq5.qry  0
15  e1.env  m6.mov  cq5.qry  0
16  e1.env  m8.mov  cq5.qry  0
17  e1.env  m7.mov  cq5.qry  3
18  e1.env  m3.mov  cq5.qry  6
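As a minimal illustration of the P-registration semantics used in these tests, the sketch below checks whether at least a given fraction of the epochs in a window carry a matching data tuple. The class and method names are illustrative only and are not part of the CommonSens evaluator.

public class PRegistrationSketch {
    // matches[i] is true if the condition held in epoch i of the window.
    static boolean minRegistration(boolean[] matches, double minFraction) {
        int hits = 0;
        for (boolean m : matches) {
            if (m) {
                hits++;
            }
        }
        return hits >= minFraction * matches.length;
    }

    public static void main(String[] args) {
        boolean[] window = {true, false, false, true, false, false}; // epochs 1..6
        System.out.println(minRegistration(window, 0.25)); // true: 2/6 >= 25%
    }
}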
Evaluates Language constructs I and II by letting tb = 1 and te = 6. This means that the complex query is timed between epochs 1 and 6, and a minimum of 25% of the data tuples have to match the condition. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
19  e1.env  m5.mov  cq6.qry  0
20  e1.env  m6.mov  cq6.qry  0
21  e1.env  m8.mov  cq6.qry  0
22  e1.env  m7.mov  cq6.qry  3
23  e1.env  m3.mov  cq6.qry  3
Evaluates Language constructs I and II by letting tb = 5 epochs. The P-registration is set to max 25%. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
24  e1.env  m5.mov  cq7.qry  3
25  e1.env  m6.mov  cq7.qry  3
26  e1.env  m8.mov  cq7.qry  3
27  e1.env  m7.mov  cq7.qry  0
28  e1.env  m3.mov  cq7.qry  6
Evaluates Language constructs I and II by letting tb = 1 and te = 6. The P-registration is set to max 25%. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
29  e1.env  m5.mov  cq8.qry  3
30  e1.env  m6.mov  cq8.qry  3
31  e1.env  m8.mov  cq8.qry  3
32  e1.env  m7.mov  cq8.qry  0
33  e1.env  m3.mov  cq8.qry  0
Evaluates Language constructs I and III. The logical operator ∧ is used between two atomic
queries that are similar. Note that we can do this since CommonSens does not optimise the
queries and does not know about this similarity. It simply evaluates both atomic queries as
if they were different. Also note that ∧ requires that both data tuple sequences are similar
(Figure 5.13).
Test number  Environment  Movement pattern  Complex query  Expected result
34  e1.env  m1.mov  cq9.qry  6
35  e2.env  m2.mov  cq9.qry  0
36  e2.env  m3.mov  cq9.qry  6
Evaluates Language constructs I and III. The logical operator ∧ is used between two atomic
queries that are similar. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
37  e1.env  m1.mov  cq10.qry  0
38  e1.env  m2.mov  cq10.qry  6
39  e1.env  m4.mov  cq10.qry  6
Evaluates Language constructs I and III. The logical operator ∧ is used between two atomic
queries that are similar. The duration is 5 epochs. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
40  e1.env  m5.mov  cq11.qry  0
41  e1.env  m6.mov  cq11.qry  0
42  e1.env  m7.mov  cq11.qry  3
Evaluates Language constructs I and III by letting tb = 1 and te = 6. The logical operator ∧
is used between two atomic queries that are similar. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
43  e1.env  m5.mov  cq12.qry  0
44  e1.env  m6.mov  cq12.qry  0
45  e1.env  m7.mov  cq12.qry  3
46  e1.env  m3.mov  cq12.qry  3
Evaluates Language constructs I, II and III by letting tb = 5 epochs. The P-registration is set to min 25%. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
47  e1.env  m5.mov  cq13.qry  0
48  e1.env  m6.mov  cq13.qry  0
49  e1.env  m8.mov  cq13.qry  0
50  e1.env  m7.mov  cq13.qry  3
51  e1.env  m3.mov  cq13.qry  6
Evaluates Language constructs I, II and III by letting tb = 1 and te = 6. The P-registration is set to min 25%. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
52  e1.env  m5.mov  cq14.qry  0
53  e1.env  m6.mov  cq14.qry  0
54  e1.env  m8.mov  cq14.qry  0
55  e1.env  m7.mov  cq14.qry  3
56  e1.env  m3.mov  cq14.qry  3
Evaluates Language constructs I, II and III by letting tb = 5 epochs. The P-registration is set to max 25%. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
57  e1.env  m5.mov  cq15.qry  3
58  e1.env  m6.mov  cq15.qry  3
59  e1.env  m8.mov  cq15.qry  3
60  e1.env  m7.mov  cq15.qry  0
61  e1.env  m3.mov  cq15.qry  6
Evaluates Language constructs I, II and III by letting tb = 1 and te = 6. The P-registration is set to max 25%. LoI3 is addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
62  e1.env  m5.mov  cq16.qry  3
63  e1.env  m6.mov  cq16.qry  3
64  e1.env  m8.mov  cq16.qry  3
65  e1.env  m7.mov  cq16.qry  0
66  e1.env  m3.mov  cq16.qry  0
Evaluates Language constructs I and III. The logical operator ∧ is used between two atomic
queries that are similar. LoI3 and LoI8 are addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
67  e3.env  m1.mov  cq17.qry  0
68  e3.env  m2.mov  cq17.qry  6
69  e3.env  m4.mov  cq17.qry  6
Evaluates Language constructs I and III. The logical operator ∧ is used between two atomic
queries that are similar. The duration is 5 epochs. LoI3 and LoI8 are addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
70  e3.env  m5.mov  cq18.qry  0
71  e3.env  m6.mov  cq18.qry  0
72  e3.env  m7.mov  cq18.qry  3
Evaluates Language constructs I and III by letting tb = 1 and te = 6. The logical operator ∧
is used between two atomic queries that are similar. LoI3 and LoI8 are addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
73  e3.env  m5.mov  cq19.qry  0
74  e3.env  m6.mov  cq19.qry  0
75  e3.env  m7.mov  cq19.qry  3
76  e3.env  m3.mov  cq19.qry  3
Evaluates Language constructs I, II and III by letting tb = 5 epochs. The P-registration is set to min 25%. LoI3 and LoI4 are addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
77  e3.env  m5.mov  cq20.qry  0
78  e3.env  m6.mov  cq20.qry  0
79  e3.env  m8.mov  cq20.qry  0
80  e3.env  m7.mov  cq20.qry  3
81  e3.env  m3.mov  cq20.qry  6
Evaluates Language constructs I, II and III by letting tb = 1 and te = 6. The P-registration is set to min 25%. LoI3 and LoI8 are addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
82  e3.env  m5.mov  cq21.qry  0
83  e3.env  m6.mov  cq21.qry  0
84  e3.env  m8.mov  cq21.qry  0
85  e3.env  m7.mov  cq21.qry  3
86  e3.env  m3.mov  cq21.qry  3
Evaluates Language constructs I, II and III by letting tb = 5 epochs. The P-registration is set to max 25%. LoI3 and LoI8 are addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
87  e3.env  m5.mov  cq22.qry  3
88  e3.env  m6.mov  cq22.qry  3
89  e3.env  m8.mov  cq22.qry  3
90  e3.env  m7.mov  cq22.qry  0
91  e3.env  m3.mov  cq22.qry  6
Evaluates Language constructs I, II and III by letting tb = 1 and te = 6. The P-registration is set to max 25%. LoI3 and LoI8 are addressed.
Test number  Environment  Movement pattern  Complex query  Expected result
92  e3.env  m5.mov  cq23.qry  3
93  e3.env  m6.mov  cq23.qry  3
94  e3.env  m8.mov  cq23.qry  3
95  e3.env  m7.mov  cq23.qry  0
96  e3.env  m3.mov  cq23.qry  0
Evaluates Language constructs I and III. Uses ∨ between atomic queries that address LoI1
to LoI4. The monitored person only needs to move to one of the LoIs.
Test number  Environment  Movement pattern  Complex query  Expected result
97  e1.env  m9.mov  cq24.qry  0
98  e1.env  m10.mov  cq24.qry  0
99  e1.env  m11.mov  cq24.qry  0
100  e1.env  m12.mov  cq24.qry  0
Evaluates Language constructs I and III. Uses ∨ between pairs of atomic queries that are
related with ∧. The complex queries address LoI1, LoI3, LoI7 and LoI8.
Test number  Environment  Movement pattern  Complex query  Expected result
101  e3.env  m9.mov  cq25.qry  0
102  e3.env  m9.mov  cq26.qry  3
Evaluates Language constructs I, II and III. Uses ∨ between pairs of atomic queries that
are related with ∧. The complex queries address LoI1, LoI3, LoI7 and LoI8.
Test number  Environment  Movement pattern  Complex query  Expected result
103  e3.env  m5.mov  cq27.qry  0
104  e3.env  m6.mov  cq27.qry  0
105  e3.env  m7.mov  cq27.qry  3
106  e3.env  m8.mov  cq27.qry  3
107  e3.env  m13.mov  cq27.qry  0
108  e3.env  m14.mov  cq27.qry  0
109  e3.env  m15.mov  cq27.qry  3
110  e3.env  m16.mov  cq27.qry  3
111  e3.env  m3.mov  cq27.qry  3
112  e3.env  m5.mov  cq28.qry  0
113  e3.env  m6.mov  cq28.qry  0
114  e3.env  m8.mov  cq28.qry  0
115  e3.env  m7.mov  cq28.qry  3
116  e3.env  m13.mov  cq28.qry  0
117  e3.env  m14.mov  cq28.qry  0
118  e3.env  m15.mov  cq28.qry  0
119  e3.env  m16.mov  cq28.qry  3
120  e3.env  m3.mov  cq28.qry  6
121  e3.env  m5.mov  cq29.qry  0
122  e3.env  m6.mov  cq29.qry  0
123  e3.env  m8.mov  cq29.qry  0
124  e3.env  m7.mov  cq29.qry  3
125  e3.env  m13.mov  cq29.qry  0
126  e3.env  m14.mov  cq29.qry  0
127  e3.env  m15.mov  cq29.qry  0
128  e3.env  m16.mov  cq29.qry  3
129  e3.env  m3.mov  cq29.qry  3
130  e3.env  m5.mov  cq30.qry  3
131  e3.env  m6.mov  cq30.qry  3
132  e3.env  m8.mov  cq30.qry  3
133  e3.env  m7.mov  cq30.qry  0
134  e3.env  m13.mov  cq30.qry  3
135  e3.env  m14.mov  cq30.qry  3
136  e3.env  m15.mov  cq30.qry  3
137  e3.env  m16.mov  cq30.qry  0
138  e3.env  m3.mov  cq30.qry  6
139  e3.env  m5.mov  cq31.qry  3
140  e3.env  m6.mov  cq31.qry  3
141  e3.env  m8.mov  cq31.qry  3
142  e3.env  m7.mov  cq31.qry  0
143  e3.env  m13.mov  cq31.qry  3
144  e3.env  m14.mov  cq31.qry  3
145  e3.env  m15.mov  cq31.qry  3
146  e3.env  m16.mov  cq31.qry  0
147  e3.env  m3.mov  cq31.qry  0
Evaluates Language constructs I, II and III. At least one of the ∧-lists is δ-timed or timed.
Test number  Environment  Movement pattern  Complex query  Expected result
148  e3.env  m5.mov  cq32.qry  0
149  e3.env  m6.mov  cq32.qry  0
150  e3.env  m8.mov  cq32.qry  3
151  e3.env  m7.mov  cq32.qry  3
152  e3.env  m13.mov  cq32.qry  0
153  e3.env  m14.mov  cq32.qry  0
154  e3.env  m15.mov  cq32.qry  3
155  e3.env  m16.mov  cq32.qry  3
156  e3.env  m3.mov  cq32.qry  3
157  e3.env  m17.mov  cq32.qry  0
158  e1.env  m18.mov  cq33.qry  0
159  e1.env  m19.mov  cq33.qry  3
Evaluates Language constructs I and IV.
Test number  Environment  Movement pattern  Complex query  Expected result
160  e1.env  m19.mov  cq34.qry  0
161  e1.env  m20.mov  cq35.qry  0
162  e1.env  m21.mov  cq35.qry  3
163  e1.env  m20.mov  cq36.qry  0
164  e1.env  m21.mov  cq36.qry  3
165  e1.env  m22.mov  cq36.qry  3
Evaluates Language constructs I and III. The logical operator is ¬.
Test number  Environment  Movement pattern  Complex query  Expected result
166  e1.env  m1.mov  cq37.qry  0
167  e1.env  m1.mov  cq38.qry  0
Evaluates Language constructs I and V. The concurrency class is EQUALS.
Test number  Environment  Movement pattern  Complex query  Expected result
168  e3.env  m1.mov  cq39.qry  0
169  e3.env  m1.mov  cq40.qry  7
Evaluates Language constructs I and V. The concurrency class is DURING.
Test number  Environment  Movement pattern  Complex query  Expected result
170  e1.env  m23.mov  cq41.qry  0
171  e1.env  m24.mov  cq41.qry  7
172  e1.env  m31.mov  cq47.qry  7
173  e1.env  m30.mov  cq47.qry  0
Evaluates Language construct I with focus on timing of complex queries.
Test number  Environment  Movement pattern  Complex query  Expected result
174  e1.env  m23.mov  cq42.qry  8
175  e1.env  m23.mov  cq43.qry  8
176  e1.env  m23.mov  cq44.qry  0
Evaluates Language constructs I and IV with combination of timing and δ-timing.
Test number  Environment  Movement pattern  Complex query  Expected result
177  e1.env  m12.mov  cq45.qry  3
178  e1.env  m25.mov  cq46.qry  0
179  e1.env  m26.mov  cq46.qry  3
180  e1.env  m27.mov  cq46.qry  3
181  e1.env  m28.mov  cq46.qry  3
182  e1.env  m29.mov  cq46.qry  3
Table A.1: Regression tests.
Test number  Average  Minimum  Maximum  End time  Result
1  0.15  0  0.67  6  True (6 = 6)
2  0.38  0  2  2  True (0 = 0)
3  0.09  0  0.5  12  True (6 = 6)
4  0.35  0  2.33  2  True (0 = 0)
5  0.07  0  0.5  6  True (6 = 6)
6  0.07  0  0.67  3  True (6 = 6)
7  0.32  0  0.83  5  True (0 = 0)
8  0.34  0  1.17  5  True (0 = 0)
9  0.26  0  1  5  True (3 = 3)
10  0.29  0  1.5  5  True (0 = 0)
11  0.29  0  0.83  5  True (0 = 0)
12  0.23  0  0.67  5  True (3 = 3)
13  0.16  0  0.71  6  True (3 = 3)
14  0.3  0  2  5  True (0 = 0)
15  0.29  0  1  5  True (0 = 0)
16  0.25  0  1  5  True (0 = 0)
17  0.25  0  0.83  5  True (3 = 3)
18  0.04  0  0.33  12  True (6 = 6)
19  0.27  0  1.17  5  True (0 = 0)
20  0.3  0  0.83  5  True (0 = 0)
21  0.22  0  0.5  5  True (0 = 0)
22  0.23  0  0.67  5  True (3 = 3)
Filename  Complex query
cq1.qry  [(DetectPerson==Person1)]
cq2.qry  [(DetectPerson==Person1, LoI3)]
cq3.qry  [(DetectPerson==Person1, LoI3, 5)]
cq4.qry  [(DetectPerson==Person1, LoI3, 1, 6)]
cq5.qry  [(DetectPerson==Person1, LoI3, 5, min 25%)]
cq6.qry  [(DetectPerson==Person1, LoI3, 1, 6, min 25%)]
cq7.qry  [(DetectPerson==Person1, LoI3, 5, max 25%)]
cq8.qry  [(DetectPerson==Person1, LoI3, 1, 6, max 25%)]
cq9.qry  [(DetectPerson==Person1) && (DetectPerson==Person1)]
cq10.qry  [(DetectPerson==Person1, LoI3) && (DetectPerson==Person1, LoI3)]
cq11.qry  [(DetectPerson==Person1, LoI3, 5) && (DetectPerson==Person1, LoI3, 5)]
cq12.qry  [(DetectPerson==Person1, LoI3, 1, 6) && (DetectPerson==Person1, LoI3, 1, 6)]
cq13.qry  [(DetectPerson==Person1, LoI3, 5, min 25%) && (DetectPerson==Person1, LoI3, 5, min 25%)]
cq14.qry  [(DetectPerson==Person1, LoI3, 1, 6, min 25%) && (DetectPerson==Person1, LoI3, 1, 6, min 25%)]
cq15.qry  [(DetectPerson==Person1, LoI3, 5, max 25%) && (DetectPerson==Person1, LoI3, 5, max 25%)]
cq16.qry  [(DetectPerson==Person1, LoI3, 1, 6, max 25%) && (DetectPerson==Person1, LoI3, 1, 6, max 25%)]
cq17.qry  [(DetectPerson==Person1, LoI3) && (DetectPerson==Person1, LoI8)]
cq18.qry  [(DetectPerson==Person1, LoI4, 5) && (DetectPerson==Person1, LoI3, 5)]
cq19.qry  [(DetectPerson==Person1, LoI3, 1, 6) && (DetectPerson==Person1, LoI8, 1, 6)]
cq20.qry  [(DetectPerson==Person1, LoI3, 5, min 25%) && (DetectPerson==Person1, LoI8, 5, min 25%)]
cq21.qry  [(DetectPerson==Person1, LoI3, 1, 6, min 25%) && (DetectPerson==Person1, LoI8, 1, 6, min 25%)]
cq22.qry  [(DetectPerson==Person1, LoI8, 5, max 25%) && (DetectPerson==Person1, LoI3, 5, max 25%)]
cq23.qry  [(DetectPerson==Person1, LoI3, 1, 6, max 25%) && (DetectPerson==Person1, LoI8, 1, 6, max 25%)]
Table A.2: Complex queries cq1.qry to cq23.qry, which are used in the regression tests.
Filename  Complex query
cq24.qry  [(DetectPerson==Person1, LoI1) || (DetectPerson==Person1, LoI2) || (DetectPerson==Person1, LoI3) || (DetectPerson==Person1, LoI4)]
cq25.qry  [(DetectPerson==Person1, LoI1) && (DetectPerson==Person1, LoI7) || (DetectPerson==Person1, LoI3) && (DetectPerson==Person1, LoI8)]
cq26.qry  [(DetectPerson==Person1, LoI4, 5) && (DetectPerson==Person1, LoI3, 5) || (DetectPerson==Person1, LoI1, 5) && (DetectPerson==Person1, LoI2, 5)]
cq27.qry  [(DetectPerson==Person1, LoI1, 1, 6) && (DetectPerson==Person1, LoI7, 1, 6) || (DetectPerson==Person1, LoI3, 1, 6) && (DetectPerson==Person1, LoI8, 1, 6)]
cq28.qry  [(DetectPerson==Person1, LoI3, 5, min 25%) && (DetectPerson==Person1, LoI8, 5, min 25%) || (DetectPerson==Person1, LoI1, 5, min 25%) && (DetectPerson==Person1, LoI7, 5, min 25%)]
cq29.qry  [(DetectPerson==Person1, LoI1, 1, 6, min 25%) && (DetectPerson==Person1, LoI7, 1, 6, min 25%) || (DetectPerson==Person1, LoI3, 1, 6, min 25%) && (DetectPerson==Person1, LoI8, 1, 6, min 25%)]
cq30.qry  [(DetectPerson==Person1, LoI1, 5, max 25%) && (DetectPerson==Person1, LoI7, 5, max 25%) || (DetectPerson==Person1, LoI8, 5, max 25%) && (DetectPerson==Person1, LoI3, 5, max 25%)]
cq31.qry  [(DetectPerson==Person1, LoI3, 1, 6, max 25%) && (DetectPerson==Person1, LoI8, 1, 6, max 25%) || (DetectPerson==Person1, LoI1, 1, 6, max 25%) && (DetectPerson==Person1, LoI7, 1, 6, max 25%)]
cq32.qry  [(DetectPerson==Person1, LoI3, 5) || (DetectPerson==Person1, LoI1, 4, 5)]
cq33.qry  [(DetectPerson==Person1, LoI1, 5) || (DetectPerson==Person1, LoI2, 5) || (DetectPerson==Person1, LoI3, 5) || (DetectPerson==Person1, LoI4, 5)]
cq34.qry  [(DetectPerson==Person1, LoI3) -> (DetectPerson==Person1, LoI4) -> (DetectPerson==Person1, LoI1) -> (DetectPerson==Person1, LoI2)]
Table A.3: Complex queries cq24.qry to cq34.qry, which are used in the regression tests.
Filename  Complex query
cq35.qry  [(DetectPerson==Person1, LoI3, 3) -> (DetectPerson==Person1, LoI4, 3) -> (DetectPerson==Person1, LoI1, 3) -> (DetectPerson==Person1, LoI2, 3)]
cq36.qry  [(DetectPerson==Person1, LoI3, 1, 3) -> (DetectPerson==Person1, LoI4, 5, 7) -> (DetectPerson==Person1, LoI1, 8, 10) -> (DetectPerson==Person1, LoI2, 11, 13)]
cq37.qry  [!(DetectPerson==Person1)]
cq38.qry  [(DetectPerson==Person1, LoI3) && !(DetectPerson==Person1, LoI4)]
cq39.qry  [equals([(DetectPerson==Person1, LoI3)] , [(DetectPerson==Person1, LoI8)])]
cq40.qry  [equals([(DetectPerson==Person1, LoI3)] , [(DetectPerson==Person1, LoI1)])]
cq41.qry  [during([(DetectPerson==Person1, LoI5)] , [(DetectPerson==Person1, LoI6, 5)])]
cq42.qry  [(DetectPerson==Person1, LoI2) -> (DetectPerson==Person1, LoI4), 1, 5, max 100%]
cq43.qry  [(DetectPerson==Person1, LoI2) -> (DetectPerson==Person1, LoI4), 0, 5, max 100%]
cq44.qry  [(DetectPerson==Person1, LoI2), 0, 5, max 100%]
cq45.qry  [(DetectPerson==Person1, LoI2, 1, max 100%) -> (DetectPerson==Person1, LoI4, 5, 7, max 100%)]
cq46.qry  [(DetectPerson==Person1, LoI1, 10, max 50%) -> (DetectPerson==Person1, LoI2, 10, min 50%) -> (DetectPerson==Person1, LoI3, 21, 30, max 50%) -> (DetectPerson==Person1, LoI4, 31, 40, min 50%)]
cq47.qry  [during([(DetectPerson==Person1, LoI5)] , [(DetectPerson==Person1, LoI6)])]
Table A.4: Complex queries cq35.qry to cq47.qry, which are used in the regression tests.
Test number  Average  Minimum  Maximum  End time  Result
23  0.14  0  0.57  6  True (3 = 3)
24  0.38  0  1  2  True (3 = 3)
25  0.5  0  4  2  True (3 = 3)
26  0.42  0  1.67  2  True (3 = 3)
27  0.18  0  0.67  5  True (0 = 0)
28  0.04  0  0.17  12  True (6 = 6)
29  0.45  0  1.67  2  True (3 = 3)
30  0.4  0  1.33  2  True (3 = 3)
31  0.45  0  1.33  2  True (3 = 3)
32  0.2  0  0.83  5  True (0 = 0)
33  0.14  0  1.29  6  True (0 = 0)
34  0.15  0  0.5  6  True (6 = 6)
35  0.41  0  1.67  2  True (0 = 0)
36  0.05  0  0.17  12  True (6 = 6)
37  0.34  0  2  2  True (0 = 0)
38  0.04  0  0.5  6  True (6 = 6)
39  0.04  0  0.33  3  True (6 = 6)
40  0.34  0  1.17  5  True (0 = 0)
41  0.31  0  0.67  5  True (0 = 0)
42  0.26  0  2  5  True (3 = 3)
43  0.28  0  0.67  5  True (0 = 0)
44  0.27  0  0.83  5  True (0 = 0)
45  0.23  0  0.67  5  True (3 = 3)
46  0.15  0  0.43  6  True (3 = 3)
47  0.29  0  0.67  5  True (0 = 0)
48  0.32  0  3.5  5  True (0 = 0)
49  0.27  0  0.67  5  True (0 = 0)
50  0.22  0  2  5  True (3 = 3)
51  0.05  0  0.33  12  True (6 = 6)
52  0.28  0  0.67  5  True (0 = 0)
53  0.3  0  0.67  5  True (0 = 0)
54  0.27  0  1  5  True (0 = 0)
55  0.23  0  0.67  5  True (3 = 3)
56  0.15  0  0.43  6  True (3 = 3)
57  0.46  0  2.33  2  True (3 = 3)
58  0.45  0  1.67  2  True (3 = 3)
59  0.43  0  1.67  2  True (3 = 3)
60  0.17  0  1.17  5  True (0 = 0)
61  0.06  0  0.42  12  True (6 = 6)
62  0.46  0  1.67  2  True (3 = 3)
Test number  Average  Minimum  Maximum  End time  Result
63  0.48  0  1.33  2  True (3 = 3)
64  0.47  0  1.67  2  True (3 = 3)
65  0.18  0  0.67  5  True (0 = 0)
66  0.11  0  0.71  6  True (0 = 0)
67  0.29  0  1.33  2  True (0 = 0)
68  0.08  0  0.5  6  True (6 = 6)
69  0.06  0  0.67  3  True (6 = 6)
70  0.37  0  0.83  5  True (0 = 0)
71  0.38  0  3.17  5  True (0 = 0)
72  0.24  0  0.67  5  True (3 = 3)
73  0.35  0  1  5  True (0 = 0)
74  0.37  0  1.17  5  True (0 = 0)
75  0.26  0  0.83  5  True (3 = 3)
76  0.15  0  0.57  6  True (3 = 3)
77  0.4  0  2.33  5  True (0 = 0)
78  0.35  0  0.83  5  True (0 = 0)
79  0.29  0  0.67  5  True (0 = 0)
80  0.25  0  1.33  5  True (3 = 3)
81  0.06  0  0.25  12  True (6 = 6)
82  0.36  0  1.5  5  True (0 = 0)
83  0.38  0  8  5  True (0 = 0)
84  0.31  0  3  5  True (0 = 0)
85  0.26  0  0.83  5  True (3 = 3)
86  0.17  0  0.43  6  True (3 = 3)
87  0.64  0  12  2  True (3 = 3)
88  0.55  0  1.67  2  True (3 = 3)
89  0.55  0  2.67  2  True (3 = 3)
90  0.17  0  0.5  5  True (0 = 0)
91  0.06  0  0.25  12  True (6 = 6)
92  0.5  0  1.33  2  True (3 = 3)
93  0.46  0  1  2  True (3 = 3)
94  0.45  0  1.67  2  True (3 = 3)
95  0.21  0  3.33  5  True (0 = 0)
96  0.12  0  0.57  6  True (0 = 0)
97  0.49  0  2  1  True (0 = 0)
98  0.44  0  3.5  1  True (0 = 0)
99  0.41  0  3.5  1  True (0 = 0)
100  0.4  0  1  1  True (0 = 0)
101  0.46  0  1.5  1  True (0 = 0)
102  0.27  0  0.67  5  True (3 = 3)
Test number  Average  Minimum  Maximum  End time  Result
103  0.34  0  0.67  5  True (0 = 0)
104  0.37  0  0.83  5  True (0 = 0)
105  0.23  0  0.5  5  True (3 = 3)
106  0.32  0  0.67  5  True (3 = 3)
107  0.38  0  0.83  5  True (0 = 0)
108  0.38  0  0.83  5  True (0 = 0)
109  0.34  0  0.67  5  True (3 = 3)
110  0.22  0  0.5  5  True (3 = 3)
111  0.19  0  0.57  6  True (3 = 3)
112  0.39  0  0.83  5  True (0 = 0)
113  0.39  0  1  5  True (0 = 0)
114  0.27  0  0.67  5  True (0 = 0)
115  0.26  0  0.67  5  True (3 = 3)
116  0.38  0  0.83  5  True (0 = 0)
117  0.37  0  1  5  True (0 = 0)
118  0.33  0  1.5  5  True (0 = 0)
119  0.25  0  0.67  5  True (3 = 3)
120  0.05  0  0.25  12  True (6 = 6)
121  0.39  0  0.83  5  True (0 = 0)
122  0.36  0  0.67  5  True (0 = 0)
123  0.28  0  0.67  5  True (0 = 0)
124  0.27  0  0.5  5  True (3 = 3)
125  0.37  0  0.83  5  True (0 = 0)
126  0.36  0  0.83  5  True (0 = 0)
127  0.29  0  0.67  5  True (0 = 0)
128  0.25  0  0.67  5  True (3 = 3)
129  0.21  0  1.43  6  True (3 = 3)
130  0.53  0  1.33  2  True (3 = 3)
131  0.5  0  1  2  True (3 = 3)
132  0.52  0.33  1  2  True (3 = 3)
133  0.21  0  0.5  5  True (0 = 0)
134  0.56  0.33  5.33  2  True (3 = 3)
135  0.53  0.33  1  2  True (3 = 3)
136  0.5  0  1  2  True (3 = 3)
137  0.2  0  0.5  5  True (0 = 0)
138  0.05  0  0.25  12  True (6 = 6)
139  0.54  0.33  4.67  2  True (3 = 3)
140  0.52  0.33  1.33  2  True (3 = 3)
141  0.56  0  2.67  2  True (3 = 3)
142  0.23  0  0.5  5  True (0 = 0)
Test number  Average  Minimum  Maximum  End time  Result
143  0.49  0.33  1  2  True (3 = 3)
144  0.48  0  1.67  2  True (3 = 3)
145  0.5  0  1  2  True (3 = 3)
146  0.21  0  0.5  5  True (0 = 0)
147  0.15  0  0.43  6  True (0 = 0)
148  0.31  0  1.33  5  True (0 = 0)
149  0.36  0  1  5  True (0 = 0)
150  0.33  0  0.67  5  True (3 = 3)
151  0.29  0  0.83  5  True (3 = 3)
152  0.24  0  1.6  4  True (0 = 0)
153  0.2  0  0.6  4  True (0 = 0)
154  0.22  0  1.17  5  True (3 = 3)
155  0.19  0  1.67  5  True (3 = 3)
156  0.19  0  0.5  5  True (3 = 3)
157  0.27  0  0.6  4  True (0 = 0)
158  0.29  0  0.67  8  True (0 = 0)
159  0.32  0.11  1.89  8  True (3 = 3)
160  0.53  0  1.6  4  True (0 = 0)
161  0.32  0  0.62  12  True (0 = 0)
162  0.35  0  0.75  3  True (3 = 3)
163  0.28  0.08  0.54  12  True (0 = 0)
164  0.29  0  0.57  6  True (3 = 3)
165  0.31  0.1  0.7  9  True (3 = 3)
166  0.63  0  3  0  True (0 = 0)
167  0.26  0  1  2  True (0 = 0)
168  0.53  0  2.33  2  True (0 = 0)
169  0.22  0.17  0.67  6  True (7 = 7)
170  0.68  0.38  1.25  7  True (0 = 0)
171  0.61  0.25  0.92  12  True (6 = 6)
172  0.74  0.53  1.1  40  True (7 = 7)
173  1.01  0.61  1.48  30  True (0 = 0)
174  0.21  0  0.67  9  True (8 = 8)
175  0.17  0  0.44  9  True (8 = 8)
176  0.64  0  4  0  True (0 = 0)
177  0.17  0  0.71  6  True (3 = 3)
178  0.18  0.05  0.32  39  True (0 = 0)
179  0.37  0  0.8  4  True (3 = 3)
180  0.16  0  0.3  19  True (3 = 3)
181  0.22  0.08  0.38  25  True (3 = 3)
182  0.16  0.05  0.34  40  True (3 = 3)
Table A.5: Regression test results.
A.3.1 Data Files from the Last Experiment in Section 6.1.2
The data files show the data tuples from the hallway experiment. The first attribute is the timestamp when the data tuple arrived. The second attribute is the sensor. The third and fourth
attributes show the capability and value, and the two last attributes show the tb and te timestamps. All the timestamps are in milliseconds.
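Since the listings are plain whitespace-separated text, each line can be read back into a data tuple directly. The following sketch shows one way to do so; the class and field names are illustrative and not part of CommonSens.

public class DataTupleLine {
    final long timestamp, tb, te;
    final String sensor, capability;
    final boolean value;

    // Parses one line such as "39000 Cam2_0 DetectMotion true 39000 39000".
    DataTupleLine(String line) {
        String[] f = line.trim().split("\\s+");
        timestamp = Long.parseLong(f[0]);
        sensor = f[1];
        capability = f[2];
        value = Boolean.parseBoolean(f[3]);
        tb = Long.parseLong(f[4]);
        te = Long.parseLong(f[5]);
    }
}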
131000 Cam1_0 DetectMotion false 131000 131000
135000 Cam1_0 DetectMotion false 135000 135000
138000 Cam1_0 DetectMotion false 138000 138000
142000 Cam1_0 DetectMotion false 142000 142000
145000 Cam1_0 DetectMotion false 145000 145000
149000 Cam1_0 DetectMotion false 149000 149000
152000 Cam1_0 DetectMotion false 152000 152000
156000 Cam1_0 DetectMotion false 156000 156000
161000 Cam1_0 DetectMotion false 161000 161000
165000 Cam1_0 DetectMotion false 165000 165000
170000 Cam1_0 DetectMotion false 170000 170000
174000 Cam1_0 DetectMotion false 174000 174000
179000 Cam1_0 DetectMotion false 179000 179000
184000 Cam1_0 DetectMotion false 184000 184000
189000 Cam1_0 DetectMotion false 189000 189000
193000 Cam1_0 DetectMotion false 193000 193000
196000 Cam1_0 DetectMotion false 196000 196000
200000 Cam1_0 DetectMotion false 200000 200000
201000 Cam1_0 DetectMotion false 201000 201000
203000 Cam1_0 DetectMotion false 203000 203000
207000 Cam1_0 DetectMotion false 207000 207000
212000 Cam1_0 DetectMotion false 212000 212000
216000 Cam1_0 DetectMotion false 216000 216000
220000 Cam1_0 DetectMotion false 220000 220000
222000 Cam1_0 DetectMotion false 222000 222000
224000 Cam1_0 DetectMotion false 224000 224000
228000 Cam1_0 DetectMotion false 228000 228000
229000 Cam1_0 DetectMotion false 229000 229000
232000 Cam1_0 DetectMotion false 232000 232000
234000 Cam1_0 DetectMotion false 234000 234000
236000 Cam1_0 DetectMotion false 236000 236000
239000 Cam1_0 DetectMotion false 239000 239000
241000 Cam1_0 DetectMotion false 241000 241000
243000 Cam1_0 DetectMotion false 243000 243000
245000 Cam1_0 DetectMotion false 245000 245000
247000 Cam1_0 DetectMotion false 247000 247000
250000 Cam1_0 DetectMotion false 250000 250000
252000 Cam1_0 DetectMotion false 252000 252000
255000 Cam1_0 DetectMotion false 255000 255000
256000 Cam1_0 DetectMotion false 256000 256000
257000 Cam1_0 DetectMotion false 257000 257000
258000 Cam1_0 DetectMotion false 258000 258000
260000 Cam1_0 DetectMotion false 260000 260000
261000 Cam1_0 DetectMotion false 261000 261000
262000 Cam1_0 DetectMotion false 262000 262000
265000 Cam1_0 DetectMotion false 265000 265000
267000 Cam1_0 DetectMotion false 267000 267000
270000 Cam1_0 DetectMotion false 270000 270000
272000 Cam1_0 DetectMotion false 272000 272000
275000 Cam1_0 DetectMotion true 275000 275000
1000 Cam2_0 DetectMotion false 1000 1000
2000 Cam2_0 DetectMotion false 2000 2000
6000 Cam2_0 DetectMotion false 6000 6000
7000 Cam2_0 DetectMotion false 7000 7000
8000 Cam2_0 DetectMotion false 8000 8000
10000 Cam2_0 DetectMotion false 10000 10000
12000 Cam2_0 DetectMotion false 12000 12000
14000 Cam2_0 DetectMotion false 14000 14000
17000 Cam2_0 DetectMotion false 17000 17000
19000 Cam2_0 DetectMotion false 19000 19000
22000 Cam2_0 DetectMotion false 22000 22000
26000 Cam2_0 DetectMotion false 26000 26000
28000 Cam2_0 DetectMotion false 28000 28000
30000 Cam2_0 DetectMotion false 30000 30000
32000 Cam2_0 DetectMotion false 32000 32000
34000 Cam2_0 DetectMotion false 34000 34000
36000 Cam2_0 DetectMotion false 36000 36000
39000 Cam2_0 DetectMotion true 39000 39000
41000 Cam2_0 DetectMotion true 41000 41000
43000 Cam2_0 DetectMotion true 43000 43000
46000 Cam2_0 DetectMotion true 46000 46000
49000 Cam2_0 DetectMotion true 49000 49000
51000 Cam2_0 DetectMotion true 51000 51000
54000 Cam2_0 DetectMotion false 54000 54000
56000 Cam2_0 DetectMotion false 56000 56000
59000 Cam2_0 DetectMotion false 59000 59000
61000 Cam2_0 DetectMotion true 61000 61000
63000 Cam2_0 DetectMotion true 63000 63000
66000 Cam2_0 DetectMotion true 66000 66000
68000 Cam2_0 DetectMotion true 68000 68000
70000 Cam2_0 DetectMotion false 70000 70000
73000 Cam2_0 DetectMotion false 73000 73000
75000 Cam2_0 DetectMotion false 75000 75000
77000 Cam2_0 DetectMotion false 77000 77000
80000 Cam2_0 DetectMotion false 80000 80000
82000 Cam2_0 DetectMotion false 82000 82000
85000 Cam2_0 DetectMotion false 85000 85000
86000 Cam2_0 DetectMotion false 86000 86000
89000 Cam2_0 DetectMotion false 89000 89000
90000 Cam2_0 DetectMotion false 90000 90000
91000 Cam2_0 DetectMotion false 91000 91000
93000 Cam2_0 DetectMotion false 93000 93000
96000 Cam2_0 DetectMotion false 96000 96000
98000 Cam2_0 DetectMotion false 98000 98000
100000 Cam2_0 DetectMotion false 100000 100000
102000 Cam2_0 DetectMotion true 102000 102000
104000 Cam2_0 DetectMotion false 104000 104000
108000 Cam2_0 DetectMotion false 108000 108000
111000 Cam2_0 DetectMotion false 111000 111000
115000 Cam2_0 DetectMotion false 115000 115000
119000 Cam2_0 DetectMotion false 119000 119000
121000 Cam2_0 DetectMotion false 121000 121000
123000 Cam2_0 DetectMotion false 123000 123000
126000 Cam2_0 DetectMotion false 126000 126000
128000 Cam2_0 DetectMotion true 128000 128000
131000 Cam2_0 DetectMotion true 131000 131000
135000 Cam2_0 DetectMotion true 135000 135000
138000 Cam2_0 DetectMotion true 138000 138000
142000 Cam2_0 DetectMotion true 142000 142000
145000 Cam2_0 DetectMotion false 145000 145000
149000 Cam2_0 DetectMotion false 149000 149000
152000 Cam2_0 DetectMotion false 152000 152000
156000 Cam2_0 DetectMotion false 156000 156000
161000 Cam2_0 DetectMotion true 161000 161000
165000 Cam2_0 DetectMotion true 165000 165000
170000 Cam2_0 DetectMotion true 170000 170000
174000 Cam2_0 DetectMotion true 174000 174000
179000 Cam2_0 DetectMotion true 179000 179000
184000 Cam2_0 DetectMotion false 184000 184000
189000 Cam2_0 DetectMotion false 189000 189000
193000 Cam2_0 DetectMotion false 193000 193000
196000 Cam2_0 DetectMotion false 196000 196000
200000 Cam2_0 DetectMotion false 200000 200000
201000 Cam2_0 DetectMotion false 201000 201000
203000 Cam2_0 DetectMotion false 203000 203000
207000 Cam2_0 DetectMotion false 207000 207000
212000 Cam2_0 DetectMotion false 212000 212000
216000 Cam2_0 DetectMotion false 216000 216000
220000 Cam2_0 DetectMotion false 220000 220000
222000 Cam2_0 DetectMotion true 222000 222000
224000 Cam2_0 DetectMotion true 224000 224000
131000 Cam3_0 DetectMotion false 131000 131000
135000 Cam3_0 DetectMotion false 135000 135000
138000 Cam3_0 DetectMotion false 138000 138000
142000 Cam3_0 DetectMotion false 142000 142000
145000 Cam3_0 DetectMotion false 145000 145000
149000 Cam3_0 DetectMotion false 149000 149000
152000 Cam3_0 DetectMotion false 152000 152000
156000 Cam3_0 DetectMotion false 156000 156000
161000 Cam3_0 DetectMotion false 161000 161000
165000 Cam3_0 DetectMotion false 165000 165000
170000 Cam3_0 DetectMotion false 170000 170000
174000 Cam3_0 DetectMotion false 174000 174000
179000 Cam3_0 DetectMotion false 179000 179000
184000 Cam3_0 DetectMotion false 184000 184000
189000 Cam3_0 DetectMotion false 189000 189000
193000 Cam3_0 DetectMotion false 193000 193000
196000 Cam3_0 DetectMotion false 196000 196000
200000 Cam3_0 DetectMotion false 200000 200000
201000 Cam3_0 DetectMotion false 201000 201000
203000 Cam3_0 DetectMotion false 203000 203000
207000 Cam3_0 DetectMotion false 207000 207000
212000 Cam3_0 DetectMotion false 212000 212000
216000 Cam3_0 DetectMotion false 216000 216000
220000 Cam3_0 DetectMotion false 220000 220000
222000 Cam3_0 DetectMotion false 222000 222000
224000 Cam3_0 DetectMotion false 224000 224000
131000 Cam4_0 DetectMotion false 131000 131000
135000 Cam4_0 DetectMotion false 135000 135000
138000 Cam4_0 DetectMotion false 138000 138000
142000 Cam4_0 DetectMotion false 142000 142000
145000 Cam4_0 DetectMotion false 145000 145000
149000 Cam4_0 DetectMotion false 149000 149000
152000 Cam4_0 DetectMotion false 152000 152000
156000 Cam4_0 DetectMotion false 156000 156000
161000 Cam4_0 DetectMotion false 161000 161000
165000 Cam4_0 DetectMotion false 165000 165000
170000 Cam4_0 DetectMotion false 170000 170000
174000 Cam4_0 DetectMotion false 174000 174000
179000 Cam4_0 DetectMotion false 179000 179000
184000 Cam4_0 DetectMotion false 184000 184000
189000 Cam4_0 DetectMotion false 189000 189000
193000 Cam4_0 DetectMotion false 193000 193000
196000 Cam4_0 DetectMotion false 196000 196000
200000 Cam4_0 DetectMotion false 200000 200000
201000 Cam4_0 DetectMotion false 201000 201000
203000 Cam4_0 DetectMotion false 203000 203000
207000 Cam4_0 DetectMotion false 207000 207000
212000 Cam4_0 DetectMotion false 212000 212000
216000 Cam4_0 DetectMotion false 216000 216000
220000 Cam4_0 DetectMotion false 220000 220000
222000 Cam4_0 DetectMotion false 222000 222000
224000 Cam4_0 DetectMotion false 224000 224000
131000 Cam5_0 DetectMotion false 131000 131000
135000 Cam5_0 DetectMotion false 135000 135000
138000 Cam5_0 DetectMotion false 138000 138000
142000 Cam5_0 DetectMotion false 142000 142000
145000 Cam5_0 DetectMotion false 145000 145000
149000 Cam5_0 DetectMotion false 149000 149000
152000 Cam5_0 DetectMotion false 152000 152000
156000 Cam5_0 DetectMotion false 156000 156000
161000 Cam5_0 DetectMotion false 161000 161000
165000 Cam5_0 DetectMotion true 165000 165000
170000 Cam5_0 DetectMotion true 170000 170000
174000 Cam5_0 DetectMotion true 174000 174000
179000 Cam5_0 DetectMotion false 179000 179000
184000 Cam5_0 DetectMotion false 184000 184000
189000 Cam5_0 DetectMotion false 189000 189000
193000 Cam5_0 DetectMotion false 193000 193000
196000 Cam5_0 DetectMotion false 196000 196000
200000 Cam5_0 DetectMotion false 200000 200000
201000 Cam5_0 DetectMotion false 201000 201000
203000 Cam5_0 DetectMotion false 203000 203000
207000 Cam5_0 DetectMotion false 207000 207000
212000 Cam5_0 DetectMotion false 212000 212000
216000 Cam5_0 DetectMotion false 216000 216000
220000 Cam5_0 DetectMotion false 220000 220000
222000 Cam5_0 DetectMotion false 222000 222000
224000 Cam5_0 DetectMotion true 224000 224000
228000 Cam5_0 DetectMotion true 228000 228000
229000 Cam5_0 DetectMotion true 229000 229000
232000 Cam5_0 DetectMotion true 232000 232000
234000 Cam5_0 DetectMotion true 234000 234000
236000 Cam5_0 DetectMotion true 236000 236000
239000 Cam5_0 DetectMotion true 239000 239000
241000 Cam5_0 DetectMotion true 241000 241000
243000 Cam5_0 DetectMotion true 243000 243000
245000 Cam5_0 DetectMotion true 245000 245000
247000 Cam5_0 DetectMotion false 247000 247000
250000 Cam5_0 DetectMotion false 250000 250000
252000 Cam5_0 DetectMotion false 252000 252000
255000 Cam5_0 DetectMotion false 255000 255000
256000 Cam5_0 DetectMotion false 256000 256000
257000 Cam5_0 DetectMotion false 257000 257000
258000 Cam5_0 DetectMotion false 258000 258000
260000 Cam5_0 DetectMotion false 260000 260000
261000 Cam5_0 DetectMotion false 261000 261000
262000 Cam5_0 DetectMotion false 262000 262000
265000 Cam5_0 DetectMotion false 265000 265000
267000 Cam5_0 DetectMotion false 267000 267000
270000 Cam5_0 DetectMotion false 270000 270000
272000 Cam5_0 DetectMotion true 272000 272000
275000 Cam5_0 DetectMotion true 275000 275000
131000 Cam6_0 DetectMotion false 131000 131000
135000 Cam6_0 DetectMotion false 135000 135000
138000 Cam6_0 DetectMotion false 138000 138000
142000 Cam6_0 DetectMotion false 142000 142000
145000 Cam6_0 DetectMotion false 145000 145000
149000 Cam6_0 DetectMotion false 149000 149000
152000 Cam6_0 DetectMotion false 152000 152000
156000 Cam6_0 DetectMotion false 156000 156000
161000 Cam6_0 DetectMotion false 161000 161000
165000 Cam6_0 DetectMotion true 165000 165000
170000 Cam6_0 DetectMotion true 170000 170000
174000 Cam6_0 DetectMotion true 174000 174000
179000 Cam6_0 DetectMotion false 179000 179000
184000 Cam6_0 DetectMotion false 184000 184000
189000 Cam6_0 DetectMotion false 189000 189000
193000 Cam6_0 DetectMotion false 193000 193000
196000 Cam6_0 DetectMotion false 196000 196000
200000 Cam6_0 DetectMotion false 200000 200000
201000 Cam6_0 DetectMotion false 201000 201000
203000 Cam6_0 DetectMotion false 203000 203000
207000 Cam6_0 DetectMotion false 207000 207000
212000 Cam6_0 DetectMotion false 212000 212000
216000 Cam6_0 DetectMotion false 216000 216000
220000 Cam6_0 DetectMotion false 220000 220000
222000 Cam6_0 DetectMotion false 222000 222000
224000 Cam6_0 DetectMotion true 224000 224000
228000 Cam6_0 DetectMotion true 228000 228000
229000 Cam6_0 DetectMotion true 229000 229000
232000 Cam6_0 DetectMotion true 232000 232000
234000 Cam6_0 DetectMotion true 234000 234000
236000 Cam6_0 DetectMotion true 236000 236000
239000 Cam6_0 DetectMotion true 239000 239000
241000 Cam6_0 DetectMotion true 241000 241000
243000 Cam6_0 DetectMotion true 243000 243000
245000 Cam6_0 DetectMotion false 245000 245000
247000 Cam6_0 DetectMotion false 247000 247000
250000 Cam6_0 DetectMotion false 250000 250000
252000 Cam6_0 DetectMotion false 252000 252000
255000 Cam6_0 DetectMotion false 255000 255000
256000 Cam6_0 DetectMotion false 256000 256000
257000 Cam6_0 DetectMotion false 257000 257000
258000 Cam6_0 DetectMotion false 258000 258000
260000 Cam6_0 DetectMotion false 260000 260000
261000 Cam6_0 DetectMotion false 261000 261000
262000 Cam6_0 DetectMotion false 262000 262000
265000 Cam6_0 DetectMotion false 265000 265000
267000 Cam6_0 DetectMotion false 267000 267000
270000 Cam6_0 DetectMotion false 270000 270000
272000 Cam6_0 DetectMotion true 272000 272000
275000 Cam6_0 DetectMotion true 275000 275000
1000 Cam7_0 DetectMotion false 1000 1000
2000 Cam7_0 DetectMotion false 2000 2000
6000 Cam7_0 DetectMotion false 6000 6000
7000 Cam7_0 DetectMotion false 7000 7000
8000 Cam7_0 DetectMotion false 8000 8000
10000 Cam7_0 DetectMotion false 10000 10000
12000 Cam7_0 DetectMotion false 12000 12000
14000 Cam7_0 DetectMotion false 14000 14000
17000 Cam7_0 DetectMotion false 17000 17000
19000 Cam7_0 DetectMotion false 19000 19000
22000 Cam7_0 DetectMotion false 22000 22000
26000 Cam7_0 DetectMotion false 26000 26000
28000 Cam7_0 DetectMotion true 28000 28000
30000 Cam7_0 DetectMotion true 30000 30000
32000 Cam7_0 DetectMotion true 32000 32000
34000 Cam7_0 DetectMotion true 34000 34000
36000 Cam7_0 DetectMotion true 36000 36000
39000 Cam7_0 DetectMotion true 39000 39000
41000 Cam7_0 DetectMotion true 41000 41000
43000 Cam7_0 DetectMotion true 43000 43000
46000 Cam7_0 DetectMotion false 46000 46000
49000 Cam7_0 DetectMotion false 49000 49000
51000 Cam7_0 DetectMotion false 51000 51000
54000 Cam7_0 DetectMotion false 54000 54000
56000 Cam7_0 DetectMotion false 56000 56000
59000 Cam7_0 DetectMotion false 59000 59000
61000 Cam7_0 DetectMotion false 61000 61000
63000 Cam7_0 DetectMotion false 63000 63000
66000 Cam7_0 DetectMotion false 66000 66000
68000 Cam7_0 DetectMotion true 68000 68000
70000 Cam7_0 DetectMotion true 70000 70000
73000 Cam7_0 DetectMotion true 73000 73000
75000 Cam7_0 DetectMotion false 75000 75000
77000 Cam7_0 DetectMotion false 77000 77000
80000 Cam7_0 DetectMotion false 80000 80000
82000 Cam7_0 DetectMotion false 82000 82000
85000 Cam7_0 DetectMotion false 85000 85000
86000 Cam7_0 DetectMotion true 86000 86000
89000 Cam7_0 DetectMotion true 89000 89000
90000 Cam7_0 DetectMotion true 90000 90000
91000 Cam7_0 DetectMotion true 91000 91000
93000 Cam7_0 DetectMotion true 93000 93000
96000 Cam7_0 DetectMotion true 96000 96000
98000 Cam7_0 DetectMotion true 98000 98000
100000 Cam7_0 DetectMotion true 100000 100000
102000 Cam7_0 DetectMotion true 102000 102000
104000 Cam7_0 DetectMotion false 104000 104000
108000 Cam7_0 DetectMotion false 108000 108000
111000 Cam7_0 DetectMotion false 111000 111000
115000 Cam7_0 DetectMotion false 115000 115000
119000 Cam7_0 DetectMotion false 119000 119000
121000 Cam7_0 DetectMotion false 121000 121000
123000 Cam7_0 DetectMotion true 123000 123000
126000 Cam7_0 DetectMotion true 126000 126000
128000 Cam7_0 DetectMotion true 128000 128000
1000 Cam8_0 DetectMotion false 1000 1000
2000 Cam8_0 DetectMotion false 2000 2000
6000 Cam8_0 DetectMotion false 6000 6000
7000 Cam8_0 DetectMotion false 7000 7000
8000 Cam8_0 DetectMotion false 8000 8000
10000 Cam8_0 DetectMotion false 10000 10000
12000 Cam8_0 DetectMotion false 12000 12000
14000 Cam8_0 DetectMotion false 14000 14000
17000 Cam8_0 DetectMotion false 17000 17000
19000 Cam8_0 DetectMotion false 19000 19000
22000 Cam8_0 DetectMotion false 22000 22000
26000 Cam8_0 DetectMotion false 26000 26000
28000 Cam8_0 DetectMotion true 28000 28000
30000 Cam8_0 DetectMotion true 30000 30000
32000 Cam8_0 DetectMotion true 32000 32000
34000 Cam8_0 DetectMotion true 34000 34000
36000 Cam8_0 DetectMotion true 36000 36000
39000 Cam8_0 DetectMotion true 39000 39000
41000 Cam8_0 DetectMotion true 41000 41000
43000 Cam8_0 DetectMotion true 43000 43000
46000 Cam8_0 DetectMotion false 46000 46000
49000 Cam8_0 DetectMotion true 49000 49000
51000 Cam8_0 DetectMotion true 51000 51000
54000 Cam8_0 DetectMotion false 54000 54000
56000 Cam8_0 DetectMotion false 56000 56000
59000 Cam8_0 DetectMotion false 59000 59000
61000 Cam8_0 DetectMotion false 61000 61000
63000 Cam8_0 DetectMotion false 63000 63000
66000 Cam8_0 DetectMotion false 66000 66000
68000 Cam8_0 DetectMotion true 68000 68000
70000 Cam8_0 DetectMotion true 70000 70000
73000 Cam8_0 DetectMotion true 73000 73000
75000 Cam8_0 DetectMotion false 75000 75000
77000 Cam8_0 DetectMotion false 77000 77000
80000 Cam8_0 DetectMotion false 80000 80000
82000 Cam8_0 DetectMotion false 82000 82000
85000 Cam8_0 DetectMotion false 85000 85000
86000 Cam8_0 DetectMotion false 86000 86000
89000 Cam8_0 DetectMotion true 89000 89000
90000 Cam8_0 DetectMotion true 90000 90000
91000 Cam8_0 DetectMotion true 91000 91000
93000 Cam8_0 DetectMotion true 93000 93000
96000 Cam8_0 DetectMotion true 96000 96000
98000 Cam8_0 DetectMotion true 98000 98000
100000 Cam8_0 DetectMotion true 100000 100000
102000 Cam8_0 DetectMotion true 102000 102000
104000 Cam8_0 DetectMotion false 104000 104000
108000 Cam8_0 DetectMotion false 108000 108000
111000 Cam8_0 DetectMotion false 111000 111000
115000 Cam8_0 DetectMotion false 115000 115000
119000 Cam8_0 DetectMotion false 119000 119000
121000 Cam8_0 DetectMotion false 121000 121000
123000 Cam8_0 DetectMotion false 123000 123000
126000 Cam8_0 DetectMotion true 126000 126000
128000 Cam8_0 DetectMotion true 128000 128000
131000 Cam8_0 DetectMotion true 131000 131000
135000 Cam8_0 DetectMotion true 135000 135000
138000 Cam8_0 DetectMotion true 138000 138000
142000 Cam8_0 DetectMotion true 142000 142000
145000 Cam8_0 DetectMotion true 145000 145000
149000 Cam8_0 DetectMotion true 149000 149000
152000 Cam8_0 DetectMotion false 152000 152000
156000 Cam8_0 DetectMotion false 156000 156000
161000 Cam8_0 DetectMotion true 161000 161000
165000 Cam8_0 DetectMotion false 165000 165000
170000 Cam8_0 DetectMotion false 170000 170000
174000 Cam8_0 DetectMotion false 174000 174000
179000 Cam8_0 DetectMotion true 179000 179000
184000 Cam8_0 DetectMotion true 184000 184000
189000 Cam8_0 DetectMotion false 189000 189000
193000 Cam8_0 DetectMotion false 193000 193000
196000 Cam8_0 DetectMotion false 196000 196000
200000 Cam8_0 DetectMotion false 200000 200000
201000 Cam8_0 DetectMotion false 201000 201000
203000 Cam8_0 DetectMotion false 203000 203000
207000 Cam8_0 DetectMotion false 207000 207000
212000 Cam8_0 DetectMotion false 212000 212000
216000 Cam8_0 DetectMotion false 216000 216000
220000 Cam8_0 DetectMotion true 220000 220000
222000 Cam8_0 DetectMotion true 222000 222000
224000 Cam8_0 DetectMotion true 224000 224000
228000 Cam8_0 DetectMotion false 228000 228000
229000 Cam8_0 DetectMotion false 229000 229000
232000 Cam8_0 DetectMotion false 232000 232000
234000 Cam8_0 DetectMotion false 234000 234000
236000 Cam8_0 DetectMotion false 236000 236000
239000 Cam8_0 DetectMotion false 239000 239000
241000 Cam8_0 DetectMotion false 241000 241000
243000 Cam8_0 DetectMotion false 243000 243000
245000 Cam8_0 DetectMotion true 245000 245000
247000 Cam8_0 DetectMotion true 247000 247000
250000 Cam8_0 DetectMotion true 250000 250000
252000 Cam8_0 DetectMotion false 252000 252000
255000 Cam8_0 DetectMotion false 255000 255000
256000 Cam8_0 DetectMotion false 256000 256000
257000 Cam8_0 DetectMotion false 257000 257000
258000 Cam8_0 DetectMotion false 258000 258000
260000 Cam8_0 DetectMotion false 260000 260000
261000 Cam8_0 DetectMotion false 261000 261000
262000 Cam8_0 DetectMotion false 262000 262000
265000 Cam8_0 DetectMotion false 265000 265000
267000 Cam8_0 DetectMotion false 267000 267000
270000 Cam8_0 DetectMotion true 270000 270000
272000 Cam8_0 DetectMotion true 272000 272000
275000 Cam8_0 DetectMotion true 275000 275000
1000 Cam9_0 DetectMotion false 1000 1000
2000 Cam9_0 DetectMotion false 2000 2000
6000 Cam9_0 DetectMotion false 6000 6000
7000 Cam9_0 DetectMotion false 7000 7000
8000 Cam9_0 DetectMotion false 8000 8000
10000 Cam9_0 DetectMotion false 10000 10000
12000 Cam9_0 DetectMotion false 12000 12000
14000 Cam9_0 DetectMotion false 14000 14000
17000 Cam9_0 DetectMotion false 17000 17000
19000 Cam9_0 DetectMotion false 19000 19000
22000 Cam9_0 DetectMotion false 22000 22000
26000 Cam9_0 DetectMotion false 26000 26000
28000 Cam9_0 DetectMotion false 28000 28000
30000 Cam9_0 DetectMotion false 30000 30000
32000 Cam9_0 DetectMotion false 32000 32000
34000 Cam9_0 DetectMotion false 34000 34000
36000 Cam9_0 DetectMotion true 36000 36000
39000 Cam9_0 DetectMotion true 39000 39000
41000 Cam9_0 DetectMotion true 41000 41000
43000 Cam9_0 DetectMotion false 43000 43000
46000 Cam9_0 DetectMotion false 46000 46000
49000 Cam9_0 DetectMotion false 49000 49000
51000 Cam9_0 DetectMotion false 51000 51000
54000 Cam9_0 DetectMotion false 54000 54000
56000 Cam9_0 DetectMotion false 56000 56000
59000 Cam9_0 DetectMotion false 59000 59000
61000 Cam9_0 DetectMotion false 61000 61000
63000 Cam9_0 DetectMotion false 63000 63000
66000 Cam9_0 DetectMotion false 66000 66000
68000 Cam9_0 DetectMotion false 68000 68000
70000 Cam9_0 DetectMotion false 70000 70000
73000 Cam9_0 DetectMotion false 73000 73000
75000 Cam9_0 DetectMotion false 75000 75000
77000 Cam9_0 DetectMotion false 77000 77000
80000 Cam9_0 DetectMotion false 80000 80000
82000 Cam9_0 DetectMotion false 82000 82000
85000 Cam9_0 DetectMotion false 85000 85000
86000 Cam9_0 DetectMotion false 86000 86000
89000 Cam9_0 DetectMotion true 89000 89000
90000 Cam9_0 DetectMotion true 90000 90000
91000 Cam9_0 DetectMotion true 91000 91000
93000 Cam9_0 DetectMotion true 93000 93000
96000 Cam9_0 DetectMotion true 96000 96000
98000 Cam9_0 DetectMotion true 98000 98000
100000 Cam9_0 DetectMotion true 100000 100000
102000 Cam9_0 DetectMotion false 102000 102000
104000 Cam9_0 DetectMotion false 104000 104000
108000 Cam9_0 DetectMotion false 108000 108000
111000 Cam9_0 DetectMotion false 111000 111000
115000 Cam9_0 DetectMotion false 115000 115000
119000 Cam9_0 DetectMotion false 119000 119000
121000 Cam9_0 DetectMotion false 121000 121000
123000 Cam9_0 DetectMotion false 123000 123000
126000 Cam9_0 DetectMotion true 126000 126000
128000 Cam9_0 DetectMotion true 128000 128000
A.4 Trace Files from Cook and Schmitter-Edgecombe
The trace files are available from [mav]. The following trace file (adlnormal/p04.t1)
matches the complex query:
2008-03-03 14:13:52.600541 M23 ON
2008-03-03 14:13:54.456505 M01 ON
2008-03-03 14:13:55.250537 M07 ON
2008-03-03 14:13:55.398526 M08 ON
2008-03-03 14:13:56.445215 M09 ON
2008-03-03 14:13:57.12047 M14 ON
2008-03-03 14:13:58.241716 M07 OFF
2008-03-03 14:13:58.363754 M01 OFF
2008-03-03 14:13:58.485715 M08 OFF
2008-03-03 14:13:58.873547 M23 OFF
2008-03-03 14:13:58.997577 M13 ON
2008-03-03 14:13:59.144543 M09 OFF
2008-03-03 14:14:01.33088 M14 OFF
2008-03-03 14:14:04.578001 I08 ABSENT
2008-03-03 14:14:09.120758 M13 OFF
2008-03-03 14:14:09.622629 M13 ON
2008-03-03 14:14:12.413874 M13 OFF
2008-03-03 14:14:13.99698 M13 ON
2008-03-03 14:14:14.487348 M13 OFF
2008-03-03 14:14:14.911205 M13 ON
2008-03-03 14:14:15.950932 M13 OFF
2008-03-03 14:14:19.244063 M13 ON
2008-03-03 14:14:20.561695 M13 OFF
2008-03-03 14:14:22.337199 M13 ON
2008-03-03 14:14:24.844515 M13 OFF
2008-03-03 14:14:33.514688 M13 ON
2008-03-03 14:14:35.631879 M13 OFF
2008-03-03 14:14:36.387969 M13 ON
2008-03-03 14:14:40.667864 M13 OFF
2008-03-03 14:14:41.618625 M13 ON
2008-03-03 14:14:42.63431 M13 OFF
2008-03-03 14:14:46.396516 M13 ON
2008-03-03 14:14:55.316742 M13 OFF
2008-03-03 14:14:55.434818 M13 ON
2008-03-03 14:14:56.446848 M13 OFF
2008-03-03 14:14:57.306908 M13 ON
2008-03-03 14:15:00 asterisk START
2008-03-03 14:15:00.476208 M13 OFF
2008-03-03 14:15:03.787079 M13 ON
2008-03-03 14:15:08.595738 M13 OFF
2008-03-03 14:15:18.95146 M13 ON
2008-03-03 14:15:19.568756 M13 OFF
2008-03-03 14:15:44.5496 M13 ON
2008-03-03 14:15:47 asterisk END
2008-03-03 14:15:52.554399 I08 PRESENT
2008-03-03 14:15:57.127844 M13 OFF
2008-03-03 14:15:57.712722 M13 ON
The following trace file (adlerror/p20.t1) does not match the complex query:
2008-04-04 12:31:59.786012 M07 ON
2008-04-04 12:32:00.578721 M09 ON
2008-04-04 12:32:01.441472 M14 ON
2008-04-04 12:32:01.817373 M23 OFF
2008-04-04 12:32:02.585161 M07 OFF
2008-04-04 12:32:02.747191 M01 OFF
2008-04-04 12:32:03.562919 M13 ON
2008-04-04 12:32:03.562919 M08 OFF
2008-04-04 12:32:03.873881 M09 OFF
2008-04-04 12:32:06.544226 M14 OFF
2008-04-04 12:32:06.865179 M14 ON
2008-04-04 12:32:07.888211 M14 OFF
2008-04-04 12:32:10.45401 I08 ABSENT
2008-04-04 12:32:15.606277 M13 OFF
2008-04-04 12:32:16.385045 M13 ON
2008-04-04 12:32:18.664438 M13 OFF
2008-04-04 12:32:20.495014 M13 ON
2008-04-04 12:32:23.804 M13 OFF
2008-04-04 12:32:23.94962 M13 ON
2008-04-04 12:32:24.966414 M13 OFF
2008-04-04 12:32:26.293044 M13 ON
2008-04-04 12:32:27.460747 M13 OFF
2008-04-04 12:32:32.432286 M13 ON
2008-04-04 12:32:40.164729 M13 OFF
2008-04-04 12:32:44.595454 M13 ON
2008-04-04 12:32:45.596171 M13 OFF
2008-04-04 12:32:53.832791 M13 ON
2008-04-04 12:33:01.91064 M13 OFF
2008-04-04 12:33:05.221913 M13 ON
2008-04-04 12:33:15 asterisk START
2008-04-04 12:33:17.888646 M13 OFF
2008-04-04 12:33:39.866439 M13 ON
2008-04-04 12:33:46.72581 M13 OFF
2008-04-04 12:34:08.509267 M13 ON
2008-04-04 12:34:13.129167 M13 OFF
2008-04-04 12:34:13.448077 M13 ON
2008-04-04 12:34:15.968381 M13 OFF
2008-04-04 12:34:31.511776 M13 ON
2008-04-04 12:34:32 asterisk END
2008-04-04 12:34:38.307348 M13 OFF
2008-04-04 12:34:55.500391 M13 ON
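Each trace line consists of a date, a time with fractional seconds, a sensor identifier and a value; the asterisk lines mark the annotated start and end of the activity. The sketch below converts one such line into a millisecond timestamp plus sensor and value. It is illustrative only (the class name and the time zone choice are assumptions), not part of CommonSens.

import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

public class TraceLine {
    // Accepts times both with and without fractional seconds.
    private static final DateTimeFormatter FMT = new DateTimeFormatterBuilder()
            .appendPattern("uuuu-MM-dd HH:mm:ss")
            .appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true)
            .toFormatter();

    public static void main(String[] args) {
        String line = "2008-03-03 14:13:52.600541 M23 ON";
        String[] f = line.split("\\s+");
        LocalDateTime t = LocalDateTime.parse(f[0] + " " + f[1], FMT);
        long millis = t.toInstant(ZoneOffset.UTC).toEpochMilli();
        System.out.println(millis + " " + f[2] + " " + f[3]);
    }
}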
That’s all, folks!