CommonSens: Personalisation of Complex Event Processing in Automated Home Care∗
Jarle Søberg, Vera Goebel, and Thomas Plagemann
Department of Informatics
University of Oslo, Norway
{jarleso, goebel, plageman}@ifi.uio.no
June 2010
Abstract
Automated home care is an emerging application domain for data
management for sensors. We present a complex event processing (CEP)
system and query language that provide the abstractions of capabilities
and locations of interest to facilitate the task of the application programmer through reuse, easier personalisation, and system-supported
sensor selection. To achieve these goals we have developed three independent models that represent the concepts an application programmer can use: (1) event model, (2) environment model, and (3) sensor
model. The sensor capabilities and locations of interest allow us to
decouple the event specification from a particular instance. The system investigates the particular environment and chooses sensors that
provide the correct capabilities and cover the locations of interest. The
personalisation is achieved by updating the values in the conditions,
the locations of interest and the time specifications in the queries. We
demonstrate with a use-case how easy it is to personalise a query in two
different environments, and use our proof-of-concept implementation
to analyse the performance of our system.
1 Introduction
Recent developments in sensor technology, video and audio analysis provide
a good basis for application domains like smart homes and automated home
care. These application domains involve reading sensor data from a great
number of sensors that are placed in the home. We use established concepts
from complex event processing (CEP) to process and analyse readings from
∗Technical Report #396, ISBN 82-7368-357-5, Department of Informatics, University of Oslo, June 2010
these sensors. However, there are still many unsolved challenges that hinder successful broad introduction of home care applications based on CEP
systems. In this paper we address three challenges.
First, there are many different kinds of sensors with many different capabilities. A capability identifies the real world phenomena the sensor can
measure. To complicate things, different types of sensors might have the
same capability and some sensors might have several capabilities. For instance, motion can be detected by cameras, motion detectors, or by aggregating information from several other types of sensors, e.g. by comparing
signal strength from a radio worn by the monitored person with a fixed set
of base stations. On the other hand cameras can provide additional capabilities like object and face recognition. Therefore, it is important to decouple
the sensor from its capabilities in the queries. During instantiation, the system should automatically relate the capabilities addressed in the queries to
the particular instance.
Second, any kind of environment has an impact on the coverage area of
sensors, e.g. walls reduce the signal strength of radio waves and block light.
This makes sensor placement a hard problem, both with respect to the initial
sensor placement in the home and with respect to checking whether a particular application
instance has a proper sensor installation. The locations of interest, i.e., the
places where events are expected to happen, also differ among the instances.
Third, to describe behavioural patterns of a monitored person a lot of
domain knowledge is necessary. Storf et al. [4] report that they needed
92 rules to describe toilet usage, personal hygiene and preparation of meals.
However, the activities of interest are relevant for many people; for instance,
fall detection and medication usage are in general the most important activities to recognise [6]. Reuse of queries that describe these activities, as
well as the ones investigated by Storf et al., would simplify the work of the
application programmer considerably. Therefore, we propose general queries, which address capabilities, locations of interest and temporal
properties, and which only need small adjustments to match the current
instance.
In order to meet these challenges we define three independent models: (1)
an event model to identify states and state transitions in the real world that
are of interest, (2) an environment model to describe the physical dimensions
of the environment and the impact it can have on various signal types, and
(3) a sensor model to describe the capabilities of sensors, their coverage and
the signal types they are using. Our system implements the models. The
sensor model separates the sensors from their capabilities. We combine the
sensor model and the environment model to address locations of interest
and placement of sensors. In addition, the core contributions of this work
include an event language that uses the models and supports reuse of the
model instances. The system is extensible so that any kind of emerging
sensor and multimedia analysis algorithm can be integrated.
Figure 1: The relation of the core elements in our conceptual model of the
real world.
The remainder of this paper is organised as follows: In Section 2 we
describe our three models and the event language. To support our claims,
Section 3 demonstrates a use-case evaluation and a performance evaluation
of our proof-of-concept implementation. We discuss related work, conclude
and address future work in Section 4.
2 Models
We define three models to identify and describe the concepts and semantics
of events, the environment where events happen, and the sensors that are
used to obtain information about the environment.
2.1 Event Model
In our conceptual model of the real world, everything that happens can be
modeled through states and state transitions.
Definition 2.1 An event e is a state or state transition in which someone
has declared interest. A state is a set of typed variables and data values. A
state transition is when one or more of the data values in a state change so
that they match another state.
Which type of variables to use depends on the instantiation of the system. Not all states and state transitions in the real world are of interest.
Therefore, we view events as subsets of these states and state transitions.
Figure 1 relates the core elements of the approach to our conceptual model
of the real world. The application programmer uses declarative queries to
describe events.
Our event definition works well with existing event processing paradigms.
For example, in publish/subscribe systems, events can be seen as publications that someone has subscribed to.
To identify when and where an event occurs and can be detected, it is important to specify temporal
and spatial properties. For the temporal properties we use timestamps, which can be seen as discrete subsets of the
continuous time domain, and time intervals.
Definition 2.2 A timestamp t is an element in the continuous time domain
T: t ∈ T. A time interval τ ⊂ T is the time span between two timestamps
tb (begin) and te (end). τ has a duration δ = te − tb.
To distinguish events from other states and state transitions, it is important to
have knowledge about the spatial properties of the events, i.e., where in the
environment they happen. These spatial properties are specified through
locations of interest.
Definition 2.3 A location of interest loi is a set of coordinates describing
the boundaries of an interesting location in the environment.
In order to simplify application development it is a common approach to
specify high-level events, which in turn are composed out of
lower-level events [3]. These two event types are called atomic events and
complex events. An event that cannot be further divided into lower-level
events is called an atomic event.
Definition 2.4 An atomic event eA is an event with time and a location
of interest: eA = (e, loi, tb, te). For the attributes, (tb, te ∈ T ∨ ∅) and
(|loi| = 1 ∨ loi = ∅). If used, the timestamps are ordered so that tb ≤ te.
Atomic events can be related concurrently or consecutively.
Definition 2.5 Two atomic events eAi and eAj are concurrent iff
∃tu, (tu ≥ eAi.tb) ∧ (tu ≥ eAj.tb) ∧ (tu ≤ eAi.te) ∧ (tu ≤ eAj.te).
For two atomic events to be concurrent there is a point in time where
there is an overlap between both atomic events. Two events are consecutive
when they do not overlap.
Definition 2.6 Two atomic events eAi and eAj are consecutive iff eAi.te < eAj.tb.
A set of atomic events can be part of a complex event.
Definition 2.7 A complex event eC is a set of N atomic events: eC =
{eA0, . . . , eAN−1}.
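To make the event model concrete, the following is a minimal Java sketch of Definitions 2.4 to 2.7, including the concurrency and consecutiveness checks; the class and field names are illustrative only and are not taken from our implementation.

    import java.util.List;

    // Illustrative sketch of Definitions 2.4-2.7; all names are hypothetical.
    class AtomicEvent {
        String event;       // the state or state transition of interest (Def. 2.1)
        String loi;         // location of interest, may be null (Def. 2.3)
        long tBegin, tEnd;  // timestamps with tBegin <= tEnd (Def. 2.2)

        AtomicEvent(String event, String loi, long tBegin, long tEnd) {
            this.event = event; this.loi = loi;
            this.tBegin = tBegin; this.tEnd = tEnd;
        }

        // Definition 2.5: some point in time lies inside both intervals.
        boolean concurrentWith(AtomicEvent other) {
            return this.tBegin <= other.tEnd && other.tBegin <= this.tEnd;
        }

        // Definition 2.6: this event ends strictly before the other begins.
        boolean consecutiveWith(AtomicEvent other) {
            return this.tEnd < other.tBegin;
        }
    }

    // Definition 2.7: a complex event is simply a set of atomic events.
    class ComplexEvent {
        List<AtomicEvent> parts;
        ComplexEvent(List<AtomicEvent> parts) { this.parts = parts; }
    }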
2.2 Environment Model
The environment is described by a configuration of objects. These objects
could be walls, furniture, etc. Once an object is defined, it can be reused in
any other instantiation. Objects have two core properties: their shape, and
how they impact signals, i.e., their permeability.
Definition 2.8 A shape s is a set of coordinates:
s = {(x, y, z)0, . . . , (x, y, z)N−1}
The triplets in s describe the convex hull (boundary) of the shape. All triplet
values are relative to (x, y, z)0 , which is referred to as base.
While each object has one shape, it can have different permeability values. For instance, a wall stops light signals while a radio signal might only
be partially reduced by the wall. Therefore, it is important to identify how
permeable objects are regarding different types of signals.
Definition 2.9 Permeability p is a tuple: p = (val, γ). val ∈ [−1, 1] is the
value of the permeability. γ denotes which signal type this permeability value
is valid for.
The lower the value of p.val (the value val in p), the lower the permeability. If the permeability value is 0, the signal does not pass, while
if p.val is less than 0, the signal is reflected. The object is defined as follows.
Definition 2.10 An object ξ is a tuple: ξ = (P, s). P = {p0, . . . , pN−1} is
a set of permeability tuples. s is the shape.
We use ξ.P to support that an object can have permeability values for
many different signal types. Finally, the environment is defined as an instance of a set of related objects.
Definition 2.11 An environment α is a set of objects: α = {ξ0, . . . , ξN−1}.
Every ξi ∈ α \ {ξ0} is relative to ξ0.
In the definition of the shape s we state that all the triplet values in the
shape ξi.s of an object are relative to ξi.s.(x, y, z)0. In an environment α
where ξi is located, ξi.s.(x, y, z)0 is relative to ξ0.s.(x, y, z)0, which is set to
(0, 0, 0).
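As an illustration of how the environment model could be represented, the following Java sketch mirrors Definitions 2.8 to 2.11; the class names and the map-based permeability representation are assumptions made for this sketch, not our actual data structures.

    import java.util.List;
    import java.util.Map;

    // Illustrative sketch of Definitions 2.8-2.11; names are hypothetical.
    class Shape {
        // Coordinate triplets relative to the first one, which acts as the base.
        List<double[]> hull;  // each entry is {x, y, z}
        Shape(List<double[]> hull) { this.hull = hull; }
    }

    class EnvObject {
        Shape shape;
        // Permeability value in [-1, 1] per signal type (Definitions 2.9 and 2.10):
        // 0 blocks the signal, values below 0 reflect it.
        Map<String, Double> permeability;
        EnvObject(Shape shape, Map<String, Double> permeability) {
            this.shape = shape; this.permeability = permeability;
        }
    }

    class Environment {
        // All objects are placed relative to the first object (Definition 2.11).
        List<EnvObject> objects;
        Environment(List<EnvObject> objects) { this.objects = objects; }

        // Fraction of a signal of type gamma that survives one object, e.g. when
        // shrinking a sensor's coverage area behind a wall.
        double permeabilityOf(EnvObject obj, String gamma) {
            return obj.permeability.getOrDefault(gamma, 1.0);
        }
    }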
2.3 Sensor Model
Sensors read analogue signals from the environment and convert these into
data tuples. Hence, a data tuple is the information that a sensor has obtained
about a state in the real world. In addition to this, the sensor model should
achieve two objectives. First, when we have described the events of interest,
the system has to determine which type of sensors to use. Second, events
might happen over time, so we want the sensors to utilise historical and
stored data together with recent data tuples. Since a sensor has to produce
data tuples, we use the terms sensor and tuple source interchangeably.
In order to meet the first objective, each sensor should provide a set of
capabilities.
Definition 2.12 A capability c is the type of state variables a sensor can
observe. This is given by a textual description: c = (description).
Capabilities like temperature reading or heart frequency reading return
values of type integer. However, capabilities might be much more complex,
like face recognition or fall detection. To capture all these possibilities in
our model we use a string as the description, such that in a particular
implementation it can be anything from simple data types, such
as integers, to XML and database schemas. The application programmer
should not address particular sensors, but rather sensor capabilities. To enable
the system to bind capabilities to the correct sensors it is necessary to describe
capabilities based on a well-defined vocabulary (or even an ontology).
In order to meet the second objective, we define three distinct types
of sensors. The physical sensor is responsible for converting the analogue
signals to data tuples. The external source only provides stored data, and
the logical sensor processes data tuples from all the three sensor types.
Definition 2.13 A physical sensor φP is a tuple: φP = (cov, γ, f, C). cov
denotes the coverage of the physical sensor. γ is the type of signal this
tuple source sends or receives. f is the maximal sampling frequency, i.e.,
how many data tuples the physical sensor can produce every second. C =
{c0, . . . , cN−1} is the set of capabilities that the sensor provides.
The physical sensor is limited to observing single states in the home.
The coverage of a physical sensor can either address specific objects or an
area. The latter is denoted coverage area. Usually, the producer of the
physical sensor defines the coverage area as it is in an environment without
obstacles. When the physical sensor is placed in the environment, the coverage area might be reduced due to objects in the environment that have
permeability tuples that match the signal type of the current physical sensor.
The external source φE includes data that is persistently stored. The
main purpose of an external source is to return data tuples from the storage.
Definition 2.14 The external source φE is an attribute: φE = (C). C is
the set of the capabilities it provides.
In contrast to the physical sensor, the external source does not obtain
readings directly from the environment. Thus, we do not include attributes
like coverage. For instance, we allow DBMSs and file systems to act as
external sources, as long as they provide data tuples that can be used by
our system. The logical sensor performs computations on data tuples from
other tuple sources.
Definition 2.15 The logical sensor is a tuple: φL = (Cd , aggL , Cp , f ). Cd
is the set of all the capabilities it depends on and aggL is a user-defined
function that aggregates the data tuples from Cd. Cp is the set of capabilities
it provides. f is defined as for physical sensors but depends on aggL and
Cd.

PS: PhysicalSensor, ES: ExternalSource, LS: LogicalSensor
Figure 2: Examples of capability hierarchies for detecting falls and taking
medication.
When the logical sensor receives a request to produce a data tuple, it
requests its tuple sources, processes the values it receives and creates a new
data tuple that contains the results. The logical sensor does not depend on
specific tuple sources, only the capabilities, i.e., any type of tuple source can
be part of φL .Cd as long as it provides the correct capabilities.
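The three kinds of tuple sources can be sketched in Java as follows; the interface and class names are illustrative, the Shape class is the one from the environment sketch above, and capabilities are represented as plain strings as suggested by Definition 2.12.

    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Function;

    // One reading, keyed by capability description (Definition 2.12).
    class DataTuple {
        Map<String, Object> values;
        DataTuple(Map<String, Object> values) { this.values = values; }
    }

    // Common view of all tuple sources: what they can observe and one reading.
    interface TupleSource {
        Set<String> capabilities();
        DataTuple read();
    }

    // Physical sensor (Definition 2.13): coverage, signal type, max sampling rate.
    abstract class PhysicalSensor implements TupleSource {
        Shape coverage;
        String signalType;
        double maxFrequency;
    }

    // External source (Definition 2.14): only replays persistently stored tuples.
    abstract class ExternalSource implements TupleSource { }

    // Logical sensor (Definition 2.15): aggregates tuples from the capabilities it
    // depends on, regardless of which concrete tuple sources provide them.
    class LogicalSensor implements TupleSource {
        List<TupleSource> inputs;                 // bound via capabilities, not types
        Set<String> provided;                     // Cp
        Function<List<DataTuple>, DataTuple> agg; // user-defined aggL

        LogicalSensor(List<TupleSource> inputs, Set<String> provided,
                      Function<List<DataTuple>, DataTuple> agg) {
            this.inputs = inputs; this.provided = provided; this.agg = agg;
        }

        public Set<String> capabilities() { return provided; }

        public DataTuple read() {
            return agg.apply(inputs.stream().map(TupleSource::read).toList());
        }
    }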
We illustrate the sensor model and capabilities with an example taken
from automated home care. Figure 2 shows capabilities for detecting falls
and that medication is taken. FallDetected is provided by a logical sensor that either depends on the accelerometer capabilities together with the
information about the monitored person (personID), or the combination of
the Camera capability and external sources that provide capabilities for face
recognition and fall detection. TakingMedication is provided either by a physical
sensor, which, e.g., uses RFID to detect that the monitored person holds the
medication, or by a logical sensor that depends on the capabilities Camera and
MedicationTaken.
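Using the sketch above, the left branch of Figure 2 could be wired roughly as follows; the helper method, the field access and the acceleration threshold are purely illustrative assumptions, placed in some helper class with the usual imports.

    // Hypothetical wiring of the left branch of Figure 2: a logical FallDetected
    // sensor built from the Accelerometer and User capabilities.
    static LogicalSensor fallDetector(TupleSource accelerometer, TupleSource user) {
        return new LogicalSensor(
            List.of(accelerometer, user),
            Set.of("FallDetected"),
            tuples -> {
                double magnitude = (double) tuples.get(0).values.get("value");
                String personId = (String) tuples.get(1).values.get("personID");
                boolean fall = magnitude > 25.0;  // made-up threshold in m/s^2
                return new DataTuple(Map.of("FallDetected", fall ? personId : ""));
            });
    }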
2.4 Query Based Event Language
In this section we describe our CEP query language, which uses capabilities
from the sensor model, and locations of interest and timestamps from the
event model to support reuse. In order to detect an event, the application
programmer assigns conditions to the capabilities. The query that addresses
one atomic event is called an atomic query.
Definition 2.16 An atomic query qA is described by a tuple: qA = (cond,
loi, tb, te). cond is a triplet (c, op, val), where c is the capability, op ∈ {=, ≠, <, ≤, >, ≥} is the operator, and val is the expected value of the capability.
loi, tb, te specify the spatial and temporal properties.
When complex events need to be described, two or more atomic queries
have to be used. These are called complex queries. In the following, the term query, denoted q, covers both
atomic and complex queries.
Definition 2.17 A complex query qC is an ordered set of atomic queries,
and logical operators and relations ρi between them: qC = {qA0 ρ0 . . . ρN−2 qAN−1}.
If the complex query only consists of one atomic query, ρ0 = ∅.
We use one temporal relation to describe that two atomic events are
consecutive, i.e., the followed by query relation (→).
Definition 2.18 The followed by relation →: {qz → qx | qz.te < qx.tb}.
The logical operators in the query language are ∧, ∨ and ¬. If any other
logical operator is needed, these three operators can be used to define it.
With respect to the temporal properties of the events, the application
programmer should be able to write two types of queries. The first type
should explicitly denote when an event should happen, e.g. between 16:00h
and 18:00h. The other type should denote how long an event lasts when it
is detected. These two types of queries are called timed queries and δ-timed
queries. A timed query has concrete tb and te values, which means that the
two timestamps denote exactly the duration of the event. For a δ-timed query, δ tells how long the
event should last once it starts; this is denoted by overloading
the semantics and setting only tb. A timed or δ-timed query can be
percent-registered (P-registered) or not. A query that is not P-registered
requires that all the readings from the relevant tuple sources match the state
during the time interval. For a P-registered query, P indicates that
within the interval the state should be satisfied for at least (minimum) or at most (maximum) a
given percentage of the time window, depending on the query. A complex query can also be timed
or δ-timed, and P-registered.
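As an illustration of the P-registered semantics, the following Java sketch decides whether the readings inside a query's time interval satisfy a minimum or maximum percentage; the method name and the boolean-list representation are assumptions for this sketch.

    import java.util.List;

    class PRegisteredCheck {
        // readingsInWindow holds, for each reading inside the query's time interval,
        // whether the condition matched. For "min P%" queries the state must hold at
        // least P percent of the window, for "max P%" queries at most P percent.
        static boolean satisfied(List<Boolean> readingsInWindow, double percent,
                                 boolean atLeast) {
            if (readingsInWindow.isEmpty()) return false;
            long matches = readingsInWindow.stream().filter(b -> b).count();
            double fraction = 100.0 * matches / readingsInWindow.size();
            return atLeast ? fraction >= percent : fraction <= percent;
        }
    }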
In addition, the application programmer can use the five concurrency
operators to describe temporal dependency between queries.
Definition 2.19 The concurrency operators:
{Equals(qz, qx) | qz.tb = qx.tb ∧ qz.te = qx.te}
{Starts(qz, qx) | qz.tb = qx.tb ∧ qz.te < qx.te}
{Finishes(qz, qx) | qz.tb < qx.tb ∧ qz.te = qx.te}
{During(qz, qx) | qz.tb > qx.tb ∧ qz.te < qx.te}
{Overlaps(qz, qx) | qz.tb > qx.tb ∧ qz.te > qx.te}
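The five operators correspond directly to simple predicates over the begin and end timestamps of the two queries, as the following Java sketch shows; the parameter names zb, ze, xb, xe stand for qz.tb, qz.te, qx.tb, qx.te, and the class itself is illustrative.

    // The concurrency operators of Definition 2.19 as predicates over two intervals.
    class Concurrency {
        static boolean equals(long zb, long ze, long xb, long xe)   { return zb == xb && ze == xe; }
        static boolean starts(long zb, long ze, long xb, long xe)   { return zb == xb && ze < xe; }
        static boolean finishes(long zb, long ze, long xb, long xe) { return zb < xb && ze == xe; }
        static boolean during(long zb, long ze, long xb, long xe)   { return zb > xb && ze < xe; }
        static boolean overlaps(long zb, long ze, long xb, long xe) { return zb > xb && ze > xe; }
    }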
3 Evaluation
We describe a use-case study to demonstrate the low effort required from the
application programmer to personalise the queries. Furthermore, we report
on a first performance evaluation of our proof-of-concept implementation. For the sensor model and the environment model we have implemented
a GUI that allows the application programmer to move and place sensors
so that they cover locations of interest and objects in the environment. The
event processor is a state machine that is generated from the query. The
system relates the capabilities in the implementation with the sensors in
the current environment. Depending on the data tuples the state machine
receives from the sensors, it either performs a state transition or remains in
the current state. The implementation is written in Java and runs on a dual-core 1.2 GHz machine with 2 GB memory.
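To illustrate how such a query-driven state machine can process data tuples at each discrete time step, consider the following Java sketch; representing the states as a list of transition predicates is a simplification assumed here, not our actual query-to-state-machine generator.

    import java.util.List;
    import java.util.Map;
    import java.util.function.Predicate;

    class QueryStateMachine {
        // One transition predicate per state; when the incoming data tuples match
        // it, the machine advances, otherwise it remains in the current state.
        List<Predicate<Map<String, Object>>> transitions;
        int current = 0;

        QueryStateMachine(List<Predicate<Map<String, Object>>> transitions) {
            this.transitions = transitions;
        }

        // Called once per time step with the collected data tuple values.
        boolean step(Map<String, Object> tuple) {
            if (current < transitions.size() && transitions.get(current).test(tuple)) {
                current++;                        // state transition
            }
            return current == transitions.size(); // query (complex event) detected
        }
    }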
3.1 Use-case Study
In order to show that personalisation can be performed by simply changing
a few parameters, we focus on queries related to detecting the activities
falling and taking medication (see Figure 2). These activities should be
detected in two different instances with minimal rewriting of the queries.
The instances are excerpts from two environments taken from related work;
the WSU smart home project [1] and MIT’s PlaceLab apartment [2]. The
instances are shown in Figure 3, are realised through our proof-of-concept
implementation, and are equipped with sensors from our sensor model. The
walls are all objects with permeability value 0 for light and 0.01 for radio
signals.
For the fall detection, timing is not relevant because a fall can happen
at any time of the day. Hence, the timing should not be specified. The fall
can also happen anywhere in the home, so the system has to constantly
poll the sensors that provide FallDetected. The query that detects the
fall is then very simple: (FallDetected = personID). The value personID
identifies the person, and must be updated for the particular instance. During
personalisation, the system determines which sensor configurations
provide the capability FallDetected and checks whether the current instance provides these sensors. If the sensors
are provided, the system instantiates the query and starts reading the data
tuples from the relevant sensors. If not, the system informs the application
programmer that the query cannot be instantiated and shows the list of
sensors that need to be in the environment.
In order to detect that the monitored person takes medications, it is
sufficient to have sensors that provide the capability TakingMedication, and
which return the type of medication that has been taken. If the monitored
person should take several medications, it is sufficient to use the ∧ operator
between each of the medication types.

Figure 3: Two environments with different setup.

The During concurrency operator can be
used to describe the temporal relation. The first part of the query identifies
that the medications are taken while the second part of the query identifies
that the monitored person is within the location of interest related to the
medication cupboard. This location of interest is called MedCupboard and
is defined with different coordinates for the two environments. The query
that the application programmer has to write is based on this template:
during(((TakingMedication = Med1_0_0,timestamps) &&
... && (TakingMedication = MedN_0_0,timestamps)),
(DetectPerson = personID,MedCupboard,timestamps))
In order to show that two different types of sensors can provide the same
capabilities, we have placed two cameras in the kitchen (Figure 3 a)) and
RFID tags in the bathroom (Figure 3 b)). The camera in the medicine cupboard covers the location of interest MedCupboard. The coverage area of the
two cameras has an angle of 90◦ , and the coverage area of the camera inside
the cupboard is reduced by the panels. In the bathroom, MedCupboard is
covered by three active RFID tags named Med1_0_0, Med2_0_0 and Med3_0_0.
The three tags are also attached to the medication and provide the capabilities TakingMedication and DetectPerson. Affected by the walls in the
medication cupboard, the coverage areas of the tags are shown as small irregular circles. This is automatically calculated by our system based on the
signal type and the permeability values of the walls. The wrist-worn RFID
reader of the monitored person returns the correct readings when it is within
the coverage areas of the tags.
By using the query above, the application programmer only needs to
personalize the types of medications, the timestamps and the coordinates of
the locations of interest. For instance, the monitored person Alice should
take all her medication between 08:00h and 09:00h every day. This process
should take at most six minutes and taking each medication should take
one minute. The last part of the query can be rewritten as (DetectPerson
= Alice, MedCupboard, 08:00h, 09:00h, min 6%). The timestamps in each
of the queries addressing TakingMedication are rewritten to, for instance,
(TakingMedication = Med1_0_0, 1m). For the monitored person Bob, each
of the medications should be taken at different times during the day. The
application programmer needs to write one query for each of the medications,
hence only one medication is supposed to be taken for each of the queries.
For each of the queries the timestamps and coordinates of the locations of
interest are simply updated to match the required temporal behaviour.
3.2 Performance Evaluation
It is important that the system detects the events in near real-time, i.e., that
the system detects events when they happen. In this section we evaluate
the processing time and scalability of our system with respect to real-time
detection of events, performance and scalability. We want to answer two
questions through our experiments:
1. How does the number of sensors to be evaluated influence the processing time? In some applications there might be a need for a considerable
number of sensors.
2. How does the complexity of queries affect the processing time?
To answer the first question, we need to increase the number of sensors.
To ensure that the number of evaluated sensors increases at every evaluation
step, we add ten additional sensors per original sensor for each experiment. Thus, in the first experiment we start with
the two sensors that provide DetectPerson. We then perform four additional
experiments with 22, 42, 62, and 82 sensors in total. The new sensors inherit
the shapes and capabilities from the sensors that are already there.
The second question is answered by increasing the number of concurrent queries that are processed. We do this by increasing the number of
medications that the monitored person has to take. We increase from 1 to
51 medications. Even though it is not realistic that the monitored person
should take 51 medications, we show that our system manages to handle
that amount of concurrent atomic queries. In this experiment we keep the
number of sensors fixed and use the original installation with two sensors.
We use synthetic sensors and workloads which are implemented through predetermined data value sequences stored in files. This enables us to
repeat experiments and to know the ground truth, i.e., the sensor readings
that reveal the behavioural patterns of the monitored person. Since our
implementation uses synthetic sensors, we cannot include the time it would
take to gather data tuples from the sensors over a network. Hence, we
measure the event processing time as the duration from the point in time
when all the relevant data tuples are collected until the system has evaluated
the tuples. We call this a discrete time step.

Figure 4: Processing time with 2, 22, 42, 62 and 82 sensors in the environment.
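The synthetic sensors used in these experiments could, for example, be realised as tuple sources that replay a predetermined trace from a file; the following Java sketch assumes one reading per line and hypothetical names, and is not our actual workload generator.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Iterator;

    class SyntheticSensor {
        private final Iterator<String> values;

        SyntheticSensor(Path trace) throws IOException {
            // The predetermined data value sequence, one reading per line, makes
            // experiments repeatable and the ground truth known in advance.
            this.values = Files.readAllLines(trace).iterator();
        }

        String nextReading() {
            return values.hasNext() ? values.next() : null;
        }
    }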
In order to use a relevant environment we use the kitchen from Figure
3 a) with a query based on the one that detects that the monitored person
takes medication. The first part of the query - taking medication - should
take 4 time steps. The monitored person should stand by the medicine
cupboard for 8 time steps to satisfy the second part of the query. At time
step 3 the monitored person moves into the coverage area of the camera that
covers MedCupboard. This causes the second part of the query to start. At
time step 4 the monitored person takes the medication. This starts the
first part of the query. At time step 7 the first part of the query finishes
successfully and at time step 10 the second part finishes successfully.
The average processing times for the first experiment are shown in Figure
4. The x-axis shows the time steps of the simulation and the y-axis shows
the processing time in milliseconds (ms). The plot shows that the processing
time increases with the number of sensors in total. There are two peaks at
time step 7 and time step 10 which are related to successfully finishing the
two parts of the query. In total, the maximum processing time is 12.8 ms
for the scenario handling the readings from 82 sensors.
Figure 5 shows the processing time and it is clear from the plot where
the processing starts. The four time steps used by the first part of the query
clearly show a linear increase in the processing time between time steps 4
and 7 when the number of concurrent queries increases. There is also a
slight peak in the plot when the second part of the query starts at time step
3 and stops at time step 10. The average processing time for 51 concurrent
queries is only 55.1 ms.

Figure 5: Processing time with an increasing number of concurrent queries.
We have given the system workloads up to 82 sensors that report similar
readings in the first experiment, and 51 concurrent queries in the second
experiment. The experiments show that even for the highest workload the
event processing part of our system handles real-time processing of data
tuples very well.
4 Conclusion
In this paper we define three models and a language that simplify the application programmer's task of matching queries to specific instances, i.e., homes, sensors and monitored persons. We support our claims by showing how the
same query applies to two different environments with two different sensor
configurations.
Challenges related to personalisation of home care systems are addressed
by Wang and Turner [5], who use a policy-based system with event,
condition, action (ECA) rules in which certain variables can be changed for
each instance. They also provide the possibility of addressing sensor types
instead of sensors directly, realised through a configuration manager that finds
the correct sensors. However, they do not provide separate models for the
events, sensors and the environment to show how one capability can be
provided by several different types of sensors in different environments.
Our event model is inspired by the semantics of atomic and complex
events as well as abstractions to higher level queries from Luckham [3].
Our event language extends Luckham’s contributions by explicitly using
capabilities and locations of interest in the atomic events.
To the best of our knowledge, there exists no system that combines all these
aspects and addresses them in a structured way for complex event
processing of sensor data.
Event processing needs to be done in real-time and we can show with
our prototype that it scales well with respect to the number of sensors and
concurrent atomic queries. The event processor in our system only uses 12.8
milliseconds to evaluate 82 sensors, and only 55.1 milliseconds to evaluate
51 concurrent queries.
Future work is to extend our system with statistical models for personalisation of timing. With respect to covering locations of interest, we will
investigate algorithms for optimising sensor placement. In addition we are
currently instrumenting our labs and offices with cameras and radio-based
sensors to create a real-world test-bed for our system.
References
[1] D. J. Cook and M. Schmitter-Edgecombe. Assessing the quality of activities in a smart environment. Methods of Information in Medicine,
2009.
[2] S. S. Intille, K. Larson, J. S. Beaudin, J. Nawyn, E. M. Tapia, and
P. Kaushik. A living laboratory for the design and evaluation of ubiquitous computing technologies. In CHI ’05: CHI ’05 extended abstracts
on Human factors in computing systems, pages 1941–1944, New York,
NY, USA, 2005. ACM.
[3] D. C. Luckham. The Power of Events: An Introduction to Complex
Event Processing in Distributed Enterprise Systems. Addison-Wesley
Longman Publishing Co., Inc., Boston, MA, USA, 2001.
[4] H. Storf, M. Becker, and M. Riedl. Rule-based activity recognition framework: Challenges, technique and learning. In Pervasive Computing Technologies for Healthcare, 2009. PervasiveHealth 2009. 3rd International
Conference on, pages 1–7, 2009.
[5] F. Wang and K. J. Turner. Towards personalised home care systems.
In PETRA ’08: Proceedings of the 1st international conference on PErvasive Technologies Related to Assistive Environments, pages 1–7, New
York, NY, USA, 2008. ACM.
[6] D. H. Wilson. Assistive Intelligent Environments for Automatic In-Home
Health Monitoring. PhD thesis, Robotics Institute, Carnegie Mellon
University, Pittsburgh, PA, September 2005.