Appendix 1 – Code

advertisement
Detection of Social Interaction Using
Consumer Grade Hardware via Device
Free Passive Localisation
Timothy Dougan
B00571531
BSc Computer Science
Supervisor – Kevin Curran
2
Abstract
Mobile devices which make use of 802.11 Wi-Fi are ubiquitous in modern society. At the
same time, there is an unmet need in research and monitoring applications, and
particularly in those relating to service and healthcare scenarios, to accurately detect the
occurrence and hence frequency and duration of human interaction between subjects.
Various sensor modalities exist that are able to perform localization of human subjects
with useful degrees of accuracy, but in all cases they are either expensive, inflexible, or
prone to influencing subject behaviour via the Hawthorne or observer effect. Given the
ubiquity of mobile devices, it is the contention of this paper that a system which localizes
human presence based on the human body’s obstructive effects on RF transmissions
through interpretation of perturbation of the Received Signal Strength values generated
during transmission, may offer a system that is both inexpensive and flexible, while
avoiding the need for direct subject participation, and thus reducing the impact of the
Hawthorne effect. This paper investigates the operation, and compares and evaluates
the performance, of different sensing systems, before proposing the investigation of
localization via a network of consumer grade mobile phones and wireless access points,
to discover whether such a system could deliver the accuracy required to detect instances
of interpersonal proximity, while retaining the advantages of cost and flexibility. This
could then be used in conjunction with audio data, also captured by the mobile devices,
to infer instances of human interaction.
3
Acknowledgements
I would like to thank Dr Kevin Curran for his help and guidance during this project, and
Kathryn for keeping my head above water.
Declaration
I declare that this is all my own work and does not contain unreferenced material copied
from any other source. I have read the University’s policy on plagiarism and understand
the definition of plagiarism. If it is shown that material has been plagiarised, or I have
otherwise attempted to obtain an unfair advantage for myself or others, I understand
that I may face sanctions in accordance with the policies and procedures of the
University. A mark of zero may be awarded and the reason for that mark will be
recorded on my file.
Signed ___________________
4
Contents
Abstract ................................................................................................................................ 3
Acknowledgements.............................................................................................................. 4
Declaration ........................................................................................................................... 4
Contents ............................................................................................................................... 5
1
Introduction ................................................................................................................. 8
2
Literature Review ....................................................................................................... 10
2.1
Active RSSI Localisation ....................................................................................... 11
2.2
Passive Localisation ............................................................................................. 13
2.2.1
Passive Non-RSSI Localisation ...................................................................... 13
2.1.2
Passive RSSI Detection ................................................................................. 15
2.1.3
Passive RSSI Localization via Mapping/Fingerprinting ................................ 16
2.1.4
Passive RSSI Localization – Alternate Systems............................................. 18
2.3
3
Summary ............................................................................................................. 19
Requirements Analysis and Specification .................................................................. 21
3.1
Problem Statement ............................................................................................. 21
3.2
Lifecycle Model ................................................................................................... 21
3.3
Requirements Analysis ........................................................................................ 22
3.3.1
Functional Requirements ............................................................................. 22
3.3.2
Non-functional Requirements ..................................................................... 22
3.4
Functional Specification ...................................................................................... 23
3.4.1
Data Recording............................................................................................. 23
3.4.2
Data Processing ............................................................................................ 23
5
4
5
Project Planning ......................................................................................................... 25
4.1
Expected Outcomes and Deliverables................................................................. 25
4.2
Milestones ........................................................................................................... 25
4.3
Time Allocation.................................................................................................... 26
4.4
Risks ..................................................................................................................... 27
Design......................................................................................................................... 28
5.1 Design Rationale ...................................................................................................... 28
6
5.2
Hardware ............................................................................................................. 29
5.3
Software .............................................................................................................. 29
5.4
Architecture......................................................................................................... 30
5.5
Class Diagram ...................................................................................................... 31
5.6
Interface .............................................................................................................. 32
Implementation ......................................................................................................... 33
6.1
6.1.1
Sensor Classes .............................................................................................. 33
6.1.2
Timing Classes .............................................................................................. 33
6.1.3
Data Management Classes........................................................................... 33
6.1.4
Usability Classes ........................................................................................... 34
6.2
7
Classes ................................................................................................................. 33
Code..................................................................................................................... 34
Results and Evaluation ............................................................................................... 39
7.1
Test One – Same room, persons present 1m from device. ................................ 41
7.2
Test Two – Same room, persons present 0.5m from device. ............................. 42
References ......................................................................................................................... 49
6
7
1
Introduction
The ability to track human motion or detect when a person is present in an area is
valuable in a variety of circumstances. Besides surveillance and security applications, for
which the utility is obvious, there are multiple service and care scenarios in which these
functions are also of use. The IR sensor on an automatic door would be a simple
example of the former, while a tracking bracelet worn by a dementia patient is the same
to the latter. Research on human behaviour, focussed on capturing the normal activity
of human subjects, also benefits from these technologies, in many contexts and via
diverse monitoring strategies.
In each of these three cases the requirement is to obtain information on human activity
at the necessary level of accuracy, while minimising intrusion and cost. Costing must
cover both equipment purchase and installation time/fees, and is minimised for obvious
reasons. Intrusiveness is more subtle, as it can have an indirect impact on accuracy or
ease of use. The Hawthorne effect (Landsberger, 1958) is that in which individuals
modify their behaviour based upon the readily apparent fact that they are being
observed in a given situation. The presence of a visible observer or monitoring device
causes people to behave other than they usually would, thereby decreasing the accuracy
of any recorded research data. Likewise, a system which requires that participants wear
or interact with unwieldy equipment is less easy to use.
Therefore, the ideal system for monitoring human activity is one which is cheap,
unobtrusive, and sufficiently accurate. Device free passive localisation (DFPL) is one
class of such human activity monitoring systems. Passive localization differs from active
localisation insofar as that in order to perform active localisation, participants or users
must carry some form of monitoring device upon their person. With reference to the
previous examples, an automatic door is a passive sensor, whereas a locator bracelet
detector is an active sensor. Cost for such devices may be nominal, but any approach
that requires active participation will be intrusive to some extent. DFPL avoids these
concerns by functioning without any such requirement – it is “device free” in that no
equipment must be carried by participants.
8
The particular implementation of DFPL discussed in this paper uses a standard Wi-Fi
access point (AP) linked to a number of ordinary smartphones dispersed around a room
in static positions as its sensing infrastructure, and requires no further equipment. In
this way it satisfies the requirement for cost, in that smartphones are ubiquitous and
therefore represent a sunken cost, as too with Wi-Fi APs. Also, there is minimal
overhead for installation, as all that is required is for the mobile phones to be placed at
certain positions in a room within range of an AP. Therefore, this system is both cheap
and unobtrusive.
The sensing component takes advantage of the absorption and scattering effects of
human bodies on RF radiation, particularly around the range of 2.4 GHz, which is
common to all Wi-Fi standards, and which also is strongly absorbed by water, of which
human bodies are largely composed. Study of this property was carried out by Kara and
Bertoni (2006, p.288), discovering that
“Environmental blockage, such as by furniture and human bodies in the vicinity
of the receiver and/or transmitter, cause deep fades that might limit the range
and the performance of short-range wireless systems.”
Although this is an issue for maintenance of a connection which becomes obstructed by
a human body, it does mean that a wireless link between a device and an AP has some
degree of ability to sense changes in the level of obstruction between the two.
Furthermore, they found that “the signal level exceeds the average level when the
person is close to the link but not blocking it” (Kara and Bertoni, 2006, p.287), which
means that as a person enters, occludes, and leaves the main body of a signal path, their
passage creates a series of perturbations in the Received Signal Strength Indication
(RSSI) recorded on receipt of a data packet that can be used to infer motion through that
path.
A specific area of research in human behaviour is that of human interaction. This could
be of use in social science, or in study into care for the elderly or those with physical or
mental impairment, in which the frequency of human interaction is believed to form an
important role in the quality of care. A simplistic description of human interaction is
that it requires two or more people to be present, and to be speaking to each other. An
9
extension of the usefulness of mobile devices is also possible, as in addition to Wi-Fi
connectivity, mobile phones also have microphones. Therefore, this project seeks to
combine the technique of RSSI DFPL to detect when several individuals are in close
proximity, with mobile phone audio sensing, to perform measurement of human
interaction, via a low cost sensing network comprised of generic commercial devices. It
will investigate whether such a system can deliver a useful level of measurement
accuracy in applications such as those discussed above. It will also attempt to describe a
possible architecture for creating such a system.
2
Literature Review
The ubiquity of mobile devices in modern society offers opportunities previously not
available in all research and monitoring applications. One possible implementation uses
RSSI as the sensing medium. RSSI is a MAC layer indication of the strength of a received
packet. It is useful mainly because it is easily accessible – any off-the-shelf device which
makes use of 802.11 standards will have an RSSI calculator integrated into its circuitry.
This makes mobile phones ideal for any implementation of RSSI localization which seeks
to minimise cost, although more specialised hardware can provide reduce systematic
inaccuracy. Inaccuracy is introduced into any such localization system by virtue of the
properties of RF propagation - in most indoor spaces there will be furniture, walls and
miscellaneous obstacles of varying sizes, shapes, and material construction.
Heterogeneous environments of this type firstly introduce multiple factors which
influence RF propagation. For example, multipath propagation is the case in which, due
to the varying reflective properties of different surfaces, multiple routes exist between
sending and receiving devices. Packets leaving one may reach the other directly, giving a
short time of flight (ToF), high RSSI, and particular angle of incidence, whereas a packet
reflected off several surfaces before reaching its destination may have a much longer
relative ToF, lower RSSI, and completely different incident angle. Each of these values
can be used in localization, and therefore multipath propagation is a serious problem for
any localisation system based on RF transmission. Likewise, RF transmission is subject to
interference from the many other RF sources present in the modern setting. This is not
limited to overlapping 802.11 devices active on similar frequencies – fluorescent
10
lighting, cordless console controllers or telephones and even microwave ovens
(Murakami et al., 2003) can interfere with a sensing network of 802.11 devices.
Another challenge to any system which is desired to be easily deployable is that indoor
environments are as stated heterogeneous – it is not possible to design a system which
is hard-coded to compensate from sources of error in all contexts. More expensive
hardware can ameliorate these and other issues, but it is however possible to design
adaptability into such a system using off-the-shelf hardware – the difficulty of such
design is the main obstacle to effective use of RSSI in a low cost context. Differing
implementations exist, and are discussed below.
2.1
Active RSSI Localisation
RSSI can be used to perform localization in both an active and a passive mode. An
advantage of the active mode is that it gives a direct reading of the signal strength
between a carried mobile node and multiple access points. Given that AP locations are
generally known and that signal overlap is desirable (multiple APs covering the same
area on different channels) this allows for various forms of RSSI processing to provide
relative or absolute localisation of a connected mobile node. Luo et al. (2011) describe
and compare some implementations of this, all of which base their calculations on the
fact that signal strength is inversely proportional to the distance between AP and node.
With three or more APs in range, a node can be localised in two dimensions through
several forms of algorithmic triangulation.
For example, a mobile node in Ring Overlapping Circle RSSI first receives the distance
between each AP in the network. These distances form radii, which are grouped into
those greater than or lesser than the inferred distance between each AP and the mobile
node. If the distance to the first AP (inferred via RSSI) is greater than the distance
between that AP and a second, then the mobile node lies outside of the circle centred
on the first AP, whose circumference intersects the second. If the distance between the
first and second is less than that between the AP and the mobile node, it lies within the
corresponding circle. Overlaying these circles in the manner of a Venn diagram, the
mobile node narrows down its location, with its estimation accuracy growing with the
11
addition of each AP pair. If the location of each AP is known, then the localisation is
absolute – if not, then the location is relative. Luo et al. achieved average accuracy of
between 1.5m and 3.1m with this system – given that this was tested in a cluttered live
environment, this represents a relatively useful result for some localization scenarios.
Conceptually simpler algorithmic implementations are also viable. Jiing-Yi et al. (2012)
use an anchor node network deployed in a regular arrangement in which the target
mobile node uses a calibrated power decay curve to determine the nearest three
anchors – it then builds the location of each anchor into a triangle with one anchor at
each vertex. The position of the target is estimated with good accuracy to be at the
centroid of the triangle thus formed. This implementation deals with the problems
inherent to RF-based localization schemes (multipath etc.) through its clever calibration
method. The power decay curve is used to establish the minimum power required to
establish a connection with a node at a given range. When sensing, the transmission
power is slowly increased until it is only just sufficient to reach the target. This method
reduces the impact of multipath propagation greatly, as packets which travel by a longer
route do not have the power necessary to be sensed by the receiver. It does however
suggest that this system may be more susceptible to interference. Nonetheless, in a test
environment cluttered similarly to that of Luo et al., Jiing-Yi et al. achieved an average
error of around 0.4m, showing that RSSI localization systems can be implemented in
realistic environments with a high degree of accuracy.
Some experimental implementations were less successful, and in many cases this seems
to be because issues such as multipath etc. have not been accounted for. For example,
Dong and Dargie (2012) found RSSI-based localization to be “unreliable as the only input
to determine the location of a mobile node in an indoor environment”, which, even after
various smoothing methods, recorded very high instability in their data, especially at
ranges closest to the receiver. This may have been in part due to their test environment
– a long narrow corridor bordered by a concrete wall, and a long window. It seems
reasonable to expect that this environment would produce severe multipath effects –
this illustrates the fact that any implementation of RF-based localization must take such
characteristics of the medium into account.
12
Nonetheless it can be seen that properly implemented RSSI-based active localization can
potentially offer localization with accuracy around the range of 1m, which is sufficient
for the purpose of detecting when two humans are within proximity suitable for
interaction. However, it is the intent of this project to provide this function in the
passive mode, and so the passive mode in general will now be discussed.
2.2
Passive Localisation
Passive localisation is necessarily less accurate than active localisation for a given
relative sensor resolution, given that more must be determined by inference rather than
direct measurement – there is not necessarily direct proportionality between any single
sensor reading and the range of the sensor to the target.
Even when a direct
proportionality does exist, the confidence boundaries on each range estimate will be
larger, and more subject to interference, given that the target is neither the source nor
the receiver of any signal. Despite this, it is still possible to estimate the position of a
target in the imaging area of a sensor network, via inference from either direct
perturbations of the substance of the system itself, or indirect perturbations in the
medium of the environment, sensed by the system. A pressure plate upon which a
target rests might give a reading of the former type, whereas a camera which receives
light reflected from a target might give a reading of the latter type.
2.2.1 Passive Non-RSSI Localisation
Multiple localization approaches which do not use RSSI have been available for many
years, but all have inherent drawbacks outside of spaces specialised to some extent
towards monitoring. At the same time, they do not suffer from the multiple drawbacks
of dealing with RF propagation.
Therefore it is worthwhile to consider some
implementations to compare their suitability to the stated scenario (low cost, ad hoc,
and easily deployable) with RSSI-based approaches. For example, it was found to be
possible to perform motion tracking of several individuals simultaneously between
multiple surveillance cameras (Cai and Aggarwal, 1996). Later the algorithmic back-end
of this sensor modality was expanded to perform accurate tracking without the heavy
processing load required by pattern extraction/feature matching (Khan et al., 2001;
Beleznai et al., 2005), but, these improvements aside, the infrastructure requirements of
13
camera installation and the privacy concerns raised by constant monitoring make this
approach unsuitable in many contexts.
Alternatively, pressure sensitive floor overlay was found to track location and hence
motion with a good degree of accuracy, for example using InfoFloor tiling (Murakita et
al., 2004) or TileTrack tiling (Valtonen et al., 2009), and was cheap on a per-unit basis.
However, it becomes expensive when covering a large area, and is not appropriate for
some floor surfaces. Also the sensors were large, and could not differentiate between
two persons standing closely together. Both InfoFloor and TileTrack are now defunct,
suggesting that it was not a commercially viable approach.
Alternative overlay
approaches such as Z-Tiles (Richardson et al., 2004) reduce the sensor size and thus
improve resolution, but at increased cost per area covered, especially when including
installation costs, and are still unsuitable for many surfaces. An underlay such as
SensFloor (Lauterbach et al., 2012) is both relatively cheap at scale, and of adequate
resolution for tracking of multiple persons, but however it requires that it be installed
under soft floor covering, which in the case of later installation means that the current
covering must be lifted and subsequently replaced.
Any implementation of a pressure-based localization system must consider the issues
with scaling and installation of proprioceptive sensors, with the result that it is generally
not suitable if flexibility or low cost are required.
More esoteric systems have also been developed.
Short-range radar systems are
available on the market, and allow the use of ultra-wideband radio to detect motion
through walls and other non-metallic obstacles. They are effective in doing so (Frazier
1996) and have been refined to become more accurate over the years (Xin et al., 2014),
but their high cost and specific utility limit their use to military, law enforcement and
disaster triage applications in which the cost can be justified. Sonar is also a viable
option for human motion tracking in specific contexts, as it is effective at moderate
range underwater (DeMarco et al., 2013), but in air the inaccuracy at range introduced
by the instability/variability of the physical medium exceeds current signal processing
14
capabilities (Schillebeeckx, 2012), and thus restricts the scanning area below that which
is useful.
In each case either the cost of the system (or system installation), inherent systematic
inflexibility, or the privacy concerns raised, make these methods unsuitable in nonspecialised spaces or environments. Therefore, in contexts where a low cost system is
required, and in which minimal setup requirements are valuable, RSSI-based DFPL may
offer a cheaper and more flexible alternative.
2.1.2 Passive RSSI Detection
As stated, the main advantage for this project of a passive localization system over an
active one is to remove the requirement of participation from observed subjects, thus
reducing susceptibility to the Hawthorne effect.
However, the passive systems
discussed above either do not satisfy this requirement (e.g. cameras), are expensive (e.g.
radar, SensFloor), are inflexible (e.g. most pressure sensors, sonar) or lack the required
resolution (e.g. tile overlays). Given that it has been shown that active RSSI localization
can satisfy all but the first requirement, it is desirable that a passive RSSI
implementation provide something approaching the utility, cost, and most importantly
accuracy, provided by RSSI in the active mode, in addition to the benefit of the passive
mode itself.
As stated above, the work of Kara and Bertoni (2006) showed that human presence can
have a large impact on RSSI values. To begin, detection of the presence of a human is a
simpler task than localization of that presence. Passive RSSI techniques have been
shown sufficient to this task many times – for example Mrazovac et al. (2013) used
calculations based on measurement of entropy inferred from RSSI values to detect
human presence with accuracy exceeding that of passive infra-red (PIR) sensors. In two
different scenarios, using indoor spaces with a normal mixture of heterogeneous
obstacles, their RSSI based detection outperformed PIR in each case, with PIR having a
detection rate of around 75% in each, whereas their RSSI detection achieved 100% and
91% respectively. A particular advantage of RSSI in their implementation was that it
could detect stationary human presence, in addition to motion, which PIR is specialized
15
towards. They further found, with regard to perturbation in RSSI levels, that “The
degree of variations is correlated with the level of human motion” (p. 300), suggesting
that in addition to bare detection, some details regarding motion can also be inferred.
Their findings are supported by the work of Wang et al. (2014), who note that when in
the vicinity of a Wi-Fi transceiver “the human body has a huge impact to the RSSI value”
(p. 4883). They find that a static human body occluding the path of transmission
between two nodes causes a decrease in the RSSI level which is stable. Additionally,
they find that entering or exiting this path is accompanied by a large fluctuation in this
level, with both a positive and negative component, a fact which is used by some studies
not discussed in this paper to aid in the classification of a motion event.
2.1.3 Passive RSSI Localization via Mapping/Fingerprinting
The foregoing are two studies out of many that show how RSSI can be used to great
effect to detect human presence. Determining a target’s location with precision is more
difficult however. A great many differing approaches have been attempted, with varying
levels of success, but the specific concept of Device Free Passive Localisation was first
discussed in paper by Youssef et al. (2007). After first testing whether RSSI data could
be used to infer presence, and finding that they could do so (twice, using two different
probabilistic calculations based on perturbations of the RSSI ground state of their test
environment) with 100% accuracy, their next step was to create a radio map of the
environment. This was stored as a histogram of the RSSI values at the receiving nodes
when a person is standing at each point in the map, with a separate version of the map
created for each node/AP pair. Now, when a person was standing at any given point in
their test area, they were able to use Bayesian inference to determine the probability
that they were at any of the points on the map, and localized the person at whichever of
these returned the highest probability. This method enabled them to localize with an
accuracy of around 0.2m, and although the paper does not give any measurements of
the test area, it comprised five rooms, a hallway, two transmitters and two receivers, for
which reason it seems safe to conclude that their results were impressively precise.
16
This is not to say that their experiment presents a suitable implementation of this
system for everyday use however. Several issues are raised in the paper without being
resolved.
1. Firstly, the radio map must be manually generated, which increases the setup
time proportional to the number of points recorded in the map.
2. Secondly, their map was created with a person standing each of only four points,
but it would be desirable for such a system to locate presence at any point within
the scanned area.
3. Thirdly, they did not investigate the issue of multi-occupancy of the space, which
would generate an entirely different RSSI pattern to single occupancy, and would
need to be handled by any practical implementation of such a system, especially
one aimed at detecting proximity of multiple individuals.
4. Fourthly, they raise the issue that changes in environmental conditions such as
heat, humidity and RF interference would decrease the accuracy of their base
RSSI map, which once again would need to be accounted for should this system
see real use.
Therefore this paper provides only proof of concept. The solution they suggest for each
of these four problems is to create a system wherein the RSSI map may be automatically
generated, and recreated as necessary in response to changing conditions. Youssef
returns to this question in (Aly and Youssef, 2013), in which they create a system which
takes as input a 3D model of the test environment, and then using known details of the
properties of the material construction of the environment and the specifications of the
sensing hardware automatically creates a map of the area. In this map, ray tracing
simulates RF propagation, and generates a prediction of the received RSSI values when a
human stands at one of 44 points within a 66m2 model apartment. In their most
successful configuration of this system, they were able to determine the location of an
individual with average error of 1.44m. However, this is only a partial solution to the
problem. Although the accuracy is still useful, though lower than that achieved in the
simpler first version of the experiment, this method only resolves two of the four stated
problems, and even then only partially.
17
The fact that the map is automatically
generated saves on the laborious process of manual mapping, but this is replaced with
the need to obtain an accurate 3D model of the environment, in which the materials of
each obstacle and feature are known. Secondly, the number of mapped points is
increased, but it is tuned to discrete points, rather than a continuous function. Also, the
model still only accounts for single occupancy, and it seems fair to imagine that
generation of additional persons would represent a geometrical increase in required
simulation time, and finally because it is generated in simulation, it cannot account for
changing environmental factors.
2.1.4 Passive RSSI Localization – Alternate Systems
Therefore, it is reasonable to conclude that this particular mapping/fingerprinting
technique is not fit for purpose as currently described. In addition, their system makes
use of expensive isotropic antennae to improve resolution, and relies in each iteration
upon a controlled environment for accuracy, neither of which would be suited to a lowcost every-day scenario. Therefore we shall consider two other proposed systems,
firstly RSSI Distribution Based Localization (RDL) (Liu et al., 2012), which investigates the
properties of 802.11 RF propagation to create an effective localization model,
addressing issues #1 and #2. Then we will consider the work of Deak et al. (2012), which
addresses multiple occupancy, issue #4.
RDL was conceived based upon the observation that effective DFPL systems to that point
all required a dense deployment of sensing nodes. Given that it is desirable to minimise
cost, dense deployment will often be unfeasible. Therefore they first investigated the
effect of human obstruction on RSSI, to determine how best to deploy resources for
detection.
They found that the propagation of an obstructed signal path can be
modelled in terms of diffraction, with two conclusions.
Firstly, that symmetrical
obstructions are of equal effect – that is to say, if an obstruction occurs at a certain
distance from the perpendicular line bisecting the path between two transceivers, and
so long as the obstruction is in either symmetrical position across that line, the
perturbation is of equal effect regardless of which side of the central the obstruction
resides. Secondly, that the signal path at the location of an obstruction can be described
as a series of Fresnel zones on a plane perpendicular to the shortest path of the signal.
18
Furthermore, they found that obstructions outside of the central Fresnel zone have
minimal effect upon RSSI, so long as at least 55% of the central zone is not occluded.
Varying position within the signal path therefore can be said to influence RSSI
proportional to the number of Fresnel zones occluded, which describes how the
variation in RSSI of an obstructed signal occurs, and with the majority of this variation
occurring when the majority of the shortest path of the signal is occluded.
This explains why a single link is not sufficient to perform localization – not only is the
scanning area small, but also the effects of occlusion are symmetrical. Therefore, they
conclude that multiple links are required, and that this number grows as the sensor area
increases. However, since they are able to quantify the sensor density appropriate to
the desired coverage, they were able to minimise the hardware requirements while
optimising the sensor placement configuration. With this, they gather RSSI data, model
the possible paths of obstruction detection between the links, and then estimate the
target’s position on each relevant path using Bayesian classification. With this system
they were able to produce a localization accuracy of around 70% using a 1m error
tolerance, rising to around 85% if the tolerance is set to 2m.
However, their approach does not address multiple occupancy. Deak et al. take a very
different approach to localisation to address this problem. In their experimental setup,
they use a much smaller sensor area than in other implementations, only ~10m squared,
surrounded by 10 wireless nodes in bidirectional mode. This relates to the increased
difficulty of detecting multiple as opposed to single occupancy. Then, after recording
RSSI data, they prune the dataset to remove node links which show the least variance in
testing, and use a multilayer Perceptron neural network to classify the data into three
sets – occupancy in one half of the sensor area, occupancy in the other half, and
occupancy in both. After training the network and validating its performance, their
system was able to correctly identify each of the three classes with ~90% accuracy.
2.3
Summary
It can be seen that RSSI based localization in the passive mode can produce accuracy
comparable to that of the active mode, and that multiple approaches have been
19
attempted in order to address the weaknesses inherent to inference of position data
from transmission in the RF medium.
Also it is evident that no single approach
adequately addresses these weaknesses to the extent that would allow employment of
this technology for robust human localization in a real world setting. However, the
differences in approach, both algorithmically and structurally, are not in every case
obviously mutually exclusive.
It is therefore hypothesized that a combination of
elements from the more successful implementations described could potentially allow
the creation of a system that could perform localization with sufficient accuracy for use
in the detection of cases in which two persons are within a certain distance of each
other, and from there to infer human interaction by further means.
20
3
Requirements Analysis and Specification
3.1
Problem Statement
Wi-Fi devices are ubiquitous in modern society. The RF transmissions that they use are
strongly affected by obstacles in their path. This puts the components of an obstacleaware sensing network into the hands of anyone with access to off-the-shelf Wi-Fi
nodes. Mobile phones are perhaps the most globally prevalent example of such a class
of device. The ability to track human activity is of use in many research and monitoring
contexts, with the ability to monitor human interaction being of particular interest in a
subset of these, but specialized hardware for this purpose is expensive. Therefore, it is
desirable to create an implementation of such a sensing network using easily accessible
mobile devices. However, the current state of the art in such implementation firstly does
not offer such tracking at the level of fidelity required for useful application, and
secondly has not combined more sophisticated techniques to improve fidelity with the
necessary functionality to detect multiple occupancy within a space. For this reason, it is
the purpose of this investigation to investigate how the techniques used in previous
studies can be combined to create a flexible and easily deployable system for the
detection of human interaction, using RSSI values to perform localization of persons
within the sensing area with sufficient accuracy to infer the possibility of their
interaction.
Also, as it is reasonable to assume that mobile devices immediately
available to any given group will not be homogenous, the implementation will attempt
to make use of a realistic variety of devices.
3.2
Lifecycle Model
The Waterfall model and Agile development both have features which make them
attractive for use in this project. On the one hand, the simple structure and well defined
flow of the Waterfall model would help stabilize what will necessarily be a messy
process of experimentation, increasing the probability of delivering a functioning final
version of this software. However, Agile is perhaps more suited to the necessary
amount of tinkering and change liable to be required in a project such with this, in which
the broad goals are known at the outset, but for which the individual steps to achieve
these goals must be determined through trial and error. Although it does increase the
21
risk of not producing an end result which is not fit for purpose, the purpose of this
project is to investigate the integration of several disparate approaches, and the
restricted flexibility of the Waterfall model may not allow time for all options to be
explored. Therefore, the model of choice for this project will be Agile. Each RSSI
modality can be incrementally developed, and then incrementally integrated, in order to
see which techniques work well in synthesis.
3.3
Requirements Analysis
3.3.1 Functional Requirements
The system must:

Collect RSSI data for each packet received

Classify which RSSI perturbations indicate the presence of a person in the sensing
area

Use these perturbations to locate the person present

Be able to distinguish between multiple persons within the sensor area

Determine the proximity of persons in the case of multiple occupancy

Activate the mobile device microphone when proximity is within a certain range

Determine whether the audio recorded contains speech

Use the combination of proximity and detected speech to classify incidences of
human interaction

Display these incidences in a human-readable way
3.3.2 Non-functional Requirements
The following hardware is required:

4+ heterogeneous Android devices running Android 2.2+ or a custom OS such as
Cyanogenmod

2 802.11 Wi-Fi access points

A development/coordination/datastore PC capable of running Matlab
The system must:
22

Store all RSSI values, localization/interaction incidences and recorded audio in a
database

Process collected RSSI values quickly so that audio recording can begin.

Process human interaction detection combining audio and localization in a
moderate amount of time – real-time operation is not required.

Be able to handle bursty RSSI data gracefully

Be able to accept RSSI and audio from any device of the types listed in the
hardware requirements
3.4
Functional Specification
3.4.1 Data Recording
Figure 3.1
3.4.2 Data Processing
The precise architecture for data processing is not known at this time. However, the
intention will be to test both Bayesian classification and neural networks. The final
implementation will most likely use Bayesian classification for localization and detection
23
of multiple occupancy, with inspection of multiple occupancy/recorded audio pairs
conducted using either Bayes or neural network. This is because neural network training
in one environment would tune detection to that environment, which would make the
system extremely inflexible, since neural network training is slow, and requires a
sizeable training data set.
However, it should be possible to abstract the
occupancy/audio pairs to the extent that it does not depend on environmental context –
input could be proximity, confidence, and audio. Proximity and confidence could be
supplied by the Bayesian algorithm, and then later processed by the neural network –
Matlab’s neural network pattern recognition tool should suffice for classification of
interaction events.
The only concern is how precisely to abstract the audio data – converting this into a
“conversation confidence” value may also be best handled by Bayesian classification, but
further study is required to determine whether this is the case. It would certainly not be
appropriate to feed raw audio data directly into a neural network.
24
4
Project Planning
4.1
Expected Outcomes and Deliverables
In terms of software, there are four deliverables. Firstly, RSSI detection software to
localise human subjects in a known environment, and infer proximity from multiple
localization events. Secondly, audio interpretation software to distinguish speech from
background sound. Thirdly, control software to use localisation to activate the audio
detection portion of the system when subjects are in close proximity. And fourthly,
software to synthesize the input from the RSSI & audio data streams to infer social
interaction.
In terms of process, the deliverables relate to the architecture and requirements of the
proposed system. Firstly, which devices are appropriate for this application? Secondly,
how they should best be arranged in the test environment? Thirdly, how should they be
tuned/calibrated to provide the highest degree of accuracy? Fourthly, which data
processing architecture is best suited to each stage of the process? And finally, does the
chosen optimal configuration classify interaction events with sufficient accuracy?
The outcomes will be a gauge of the feasibility of using consumer mobile devices for RSSI
& audio detection individually, the effectiveness of interpreting synthesized data from
each to infer human interaction, and a comparison of this approach to those that have
preceded it, examining whether the trade-offs inherent to this proposed system are
worthwhile with relation to accuracy.
4.2
Milestones
1. Interim report submission, 10/12/14.
2. Prototype complete, 08/04/14.
3. Final report, 01/05/14.
4. Viva, 25/05/14.
25
4.3
Time Allocation
4.4
Risks
Most of the risks relating to this project concern factors which may cause delays in
completion.
Given that the main contributor to this project works full time in a
challenging role in addition to undertaking this project, unavoidable work pressures may
make allocation of sufficient time to meet deadlines difficult at times. Also, the same
contributor is known to be complacent on occasion. There also exists the possibility that
illness or other uncontrollable factors may delay completion. Therefore, firstly a buffer
of 17 days is included in the project plan, to allow for delays. Secondly, it will be
necessary to obtain a firm commitment from the contributor’s employer to allow for
flexibility, especially in the second semester, when this project will be most taxing.
Another possible risk is that of data loss. This risk can be mitigated by keeping all project
code and documentation in a folder synched to a Dropbox account, providing a
constantly updated backup of all data. A final risk is the possibility that using mobile
phones in this way does not provide the granularity required to infer results of the type
desired. Although there is value in a negative result, it would be better to allow for
flexibility to find a positive result, and so during the initial literature review, time was
taken to establish whether there are any known issues with this sort of application of
mobile phone sensors, with effort taken to modify the project scope accordingly. It was
found that acceptable performance could be achieved despite each of the drawbacks of
the use of mobile devices in this fashion individually, and so it seems likely that synthesis
of these techniques may provide useful results. Finding that synthesis of these preexisting techniques is not possible would itself be a valuable result.
5
Design
5.1 Design Rationale
The specific use case scenario around which the design was constructed was that of
measuring social interaction in an elderly care facility. The following assumptions were
made:
1. The system will measure duration of social interaction of individual patients, one
per installation.
2. Therefore the installation must be extremely simple to both make and
subsequently unmake for reuse with another subject. Simple both in the sense
that it can be handled by untrained personnel, and also that it does not require
any outlay in making and unmaking.
3. Patients in these environments are sedentary. Personal rooms tend to have a
bed, and a number of chairs.
4. If a patient is bedridden, then this will be known by staff. If they are not, then it
is likely that they will have a preferred chair, and this should also be known by
staff.
5. The dispersal of Wi-Fi hubs is unknown, and likely to be highly variable from
facility to facility. Some may have none, in which case this proposed system is
unsuitable. In others, there may be many, or few, and their proximity to patient
rooms will also vary. However, their position will also be known by staff.
6. Therefore, the desired setup is that the sensing devices should be placed in
groups of no less than two, such that one lies at the end point of a line between
the patient and the nearest Wi-Fi hub, and that the second lies at the end point
of a line between the Wi-Fi hub and the likely position of any visitor or attending
nurse, when they are interacting with the patient. This ignores the issue of NonLine of Sight propagation intentionally, for reasons described later.
Therefore, the requirements are for a device-agnostic software system which can be
operated with as little user interaction as is possible. What must be determined is
whether such a system can offer useful data, firstly in ideal circumstances, and then in a
28
more realistic scenario. These parameters informed the selection of hardware and
design of software.
5.2
Hardware
Mobile phones are the chosen sensor platform for this system. The Android OS is
installed on a greater variety of phones than iOS, and has greater market share than
Windows Phone, and therefore was selected. Two devices were used – the first was a
four year old Samsung Galaxy S2 running Android 4.0.3. The second was a one year old
HTC One Mini 2 running Android 4.4.2. These are unremarkable, affordable devices
running standard Android software. This allows baseline testing of the hypothesis.
However, based on the assumption that newer devices are likely to have more
sophisticated/powerful Wi-Fi hardware, it also allows investigation of whether newer
phones are more suited to the task, and to what extent this difference applies.
The primary sending platform was a BT Home Hub 3.0, two years old, and again
representative of standard consumer hardware. A secondary hub was selected, a Sky
Hub SR101 to allow testing to eliminate varying hubs as a source of error.
5.3
Software
Three components are necessary in the sensor software.
1. RSSI level measurement.
The android.net.wifi.WifiManager class offers this
functionality.
2. Audio level measurement. Audio access without level balancing is provided by
the android.media.AudioRecord class. The MediaRecorder class is easier to
implement, but since it performs level balancing on the fly, it is not suitable for
discerning differences between audio levels, especially in a setting with
background noise. This buffered input then must be analysed to determine the
ambient volume. Since it is not only possible, but also desirable for reasons of
29
privacy, to do this without recording audio permanently, buffered data can be
discarded after analysis.
3. Filesystem access. The compiled data is simple, so holding it in an array and then
outputting to a plain comma separated text file will suffice. Since this is a onetime operation for each test iteration, there are no special considerations for
selection of an appropriate class, so OutputStreamWriter was chosen.
Finally, for convenience rather than necessity, a class to encapsulate each data point
(RSSI, ambient volume, timestamp) is desirable, and will be created for storage in a
java.util.ArrayList.
5.4
Architecture
Sensor data is collected from initialization until stopped and stored. This simplifies
operation. However, the sensors are still mediated through a timer, and they feed back
into the control portion. On user input, the timer is stopped, and the collected output is
saved.
30
Figure 5.1 – Architecture overview
The interface portion is very simple. All it does is show a live feed of the current sensor
data so that the user can verify that the device is operating, with a single button to stop
the recording and save the data to file.
The controller starts the timer so that data can be collected at regular intervals, takes
the data from the audio and RSSI sensors, and stores it in an array. It also initializes the
audio recording with appropriate parameters, and builds the necessary objects to collect
the RSSI data. Once recording is complete, it stops the timer and outputs to phone
storage.
The timer is very simple – it is set to cycle in sync with a phone’s internal Wi-Fi refresh
rate, which is around 2 seconds, and triggers the sensors on each cycle.
The sensors run simultaneously in a single thread. The RSSI value is pulled directly.
Audio is pulled into a buffer, and then the max amplitude in each recording segment is
calculated. The sensors then drop these values along with some housekeeping data into
an array managed by the controller.
Once recording is complete, the user triggers a control on the interface, which causes
the controller to kill the timer, and invoke output to dump the collected data into
external storage in the application’s public Files directory.
5.5
Class Diagram
Figure 5.2 – Class Diagram
31
As can be seen, the classes are not well organised. It is however functional. onCreate
initialises the audio sensors by calling getAudioParms, which returns an AudioRecord
object set up with the minimal level of fidelity and corresponding buffer size that is valid
on any given device. Once that has started recording, the Wi-Fi interface objects are
built, and once they are connected, the Timer starts. The run method is within the
Handler that the Timer is connected to, and it pulls data from the WifiInfo and
AudioRecord objects every 2000 milliseconds, with a timestamp on each from
getGoodTime, which converts a system time in milliseconds into a human readable
format. The onClick method stops the timer, calls the fileWrite method (which dumps
the ArrayList of DataPoint objects into a text file), and then operation is complete.
5.6
Interface
As far as the user is concerned, all activity takes place in a single motion – start the app,
hit stop when done.
Figure 5.3 - Interface
32
The data is updated live so that the user can confirm that the program is active. If there
is no active connection, this is also reported.
6
Implementation
6.1
Classes
6.1.1 Sensor Classes
The main class for audio is AudioRecord, but it needs quite a lot of setup to use – this is
because there is no guarantee of a universal format, since different devices support
different rates, channel setups, and PCM bit depths. Therefore pulling in constants from
AudioFormat and MediaRecorder classes is useful when performing initialisation.
RSSI collection requires a series of objects to be set up together.
First, a
ConnectivityManager provides access to the device’s network connectivity state – a
NetworkInfo object can be pulled from this, which allows the application to determine
whether Wi-Fi is connected. Then, a WifiManager object provides access to the Wi-Fi
connection itself. Pulling a WifiInfo object from this gives access to details of the
currently connected Wi-Fi service, including RSSI values.
6.1.2 Timing Classes
There are three – a Timer, a TimerTask, and a Handler. The Timer allows a task to be
scheduled on a thread at a fixed rate, once every so many milliseconds. The TimerTask
is a Runnable – it permits execution of code on a thread, and is passed into the Timer for
scheduling. Then the Handler gives access to the MessageQueue attached to the active
thread, allowing Runnables to be posted to it – the Handler does this from inside the
TimerTask.
6.1.3 Data Management Classes
It was convenient to store the recorded data in an ArrayList for ease of access –
ArrayLists are dynamically sized, so recording an unknown number of data points is
made simple, although this is only possible because of the created DataPoint class, since
ArrayLists can only store Objects.
33
Then, access to the filesystem is mediated through an OutputStreamWriter. It writes
bytes to the FileOutputStream that is passed to it, which in turn streams bytes to the
storage location specified in a File object which is passed to it. Since the file write is a
one-time operation per execution, and since the String data which is passed to it is
small, there is no need to use a buffer on the OutputStream.
6.1.4 Usability Classes
A few classes are used to improve readability of output.
Formatter contains a
deprecated method to convert the integer format IP address returned by a WifiInfo
object into a standard dot separated IPv4 address. This class cannot handle IPv6
addresses, but since the IP address is output only for verification of the connected hub
(alongside the equally useful SSID), no extra time was spent making this feature
bulletproof.
Then, SimpleDateFormat allows the internal “epoch time” in milliseconds to be
converted into a human-readable day/hour/minute/second/millisecond format, for
timestamping of individual datapoints.
Finally, for output, Toast gives the user a popup for feedback to relate the
successfulness of operations, and the Log class allows data to be dumped to the system
log for debugging purposes.
6.2
Code
This section breaks down the methods used in the application, including code samples
for the significant parts. Full code can be seen in Appendix 1. The first method to be
called when the application starts is onCreate, and it performs a lot of setup. Firstly, it
binds the TextViews and Button to their respective XML IDs.
It also starts the
OnClickListener – this is implemented by the MainActivity class, and it allows the button
to receive click events.
Next, onCreate initialises and starts the AudioRecord object:
try {
34
recorder = getAudioParms();
recorder.startRecording();
}catch (IllegalArgumentException e){
e.printStackTrace();
recorder = null;
Toast.makeText (this, "Incompatible Device",
Toast.LENGTH_LONG).show();
}
If the try block is successful, then the method immediately calls startRecording on the
returned AudioRecord object. The catch block sets the AudioRecord object “recorder”
to null – this means that any error in this step is unrecoverable. However, an error does
occur in this step, it means that the getAudioParms method called at the top has been
unable to find a valid encoding format, thus the device is incompatible.
The
getAudioParms method itself consists of several loops. In order for it to return an
initialised AudioRecord object. The top loop runs over a set of predefined sampling
rates starting at 8000Hz. Within this, it loops over two valid bit depths, 8 and 16, and
then the inner loop runs over the two possible channel configurations, mono and stereo.
These latter two loops pull the short integer values that represent the respective
properties from the AudioFormat class. Then, it tries to calculate the buffer size that
each of the iterations would represent for an AudioRecord object:
int bufferSize = AudioRecord.getMinBufferSize(rate, channelConfig,
audioFormat);
if (bufferSize != AudioRecord.ERROR_BAD_VALUE) {
BufferElements2Rec = bufferSize/audioFormat;
AudioRecord recorder = new
AudioRecord(MediaRecorder.AudioSource.DEFAULT, rate, channelConfig,
audioFormat, bufferSize);
if (recorder.getState() == AudioRecord.STATE_INITIALIZED) return
recorder;
If the configuration in a given iteration is bad, it will set the buffer size to -2. If the
calculated value does not correspond to this, then the combination of rate, channel and
35
format is good, and the method will then build, test, and return the AudioRecord object
“recorder” to the onCreate method.
The onCreate will then set up most of the Wi-Fi sensing objects:
ConnectivityManager cManager = (ConnectivityManager)
getSystemService(CONNECTIVITY_SERVICE);
nInfo = cManager.getNetworkInfo(ConnectivityManager.TYPE_WIFI);
wManager = (WifiManager)getSystemService(Context.WIFI_SERVICE);
The ConnectivityManager and WifiManager can be pulled directly from the Context by
calling getSystemService, which returns objects of the required type based on the
constants passed to it. The ConnectivityManager is used to create a NetworkInfo object,
which contains information about the currently connected Wi-Fi network. This pair of
objects functions only to later call isConnected from the NetworkInfo object, but this is
necessary as the application must have an active Wi-Fi connection in order to function.
Finally, onCreate starts the Timer:
myTimer = new Timer();
myTimer.schedule(new TimerTask() {
@Override
public void run() {
hTime.post(pullData);
}
}, 0, 2000);
The Timer is instantiated, and a TimerTask is passed to its Schedule method – the Run
method of the TimerTask is overridden to invoke the Post method of the Handler
“hTime”, which itself is passed the pullData method which actually contains the sensing
operations, and sets them to run immediately, and every 2000 milliseconds
subsequently.
The Runnable pullData will only function if the phone is connected, and if the bStop
Boolean has not been toggled by the user clicking on btnStop. Assuming that this is the
case, every two seconds pullData will build the remaining Wi-Fi data object WifiInfo
from the pre-created WifiManager object:
WifiInfo wInfo = wManager.getConnectionInfo();
36
int myIp = wInfo.getIpAddress();
String sIP = Formatter.formatIpAddress(myIp);
String sSSID = wInfo.getSSID();
String sRSSI = String.valueOf(wInfo.getRssi());
It can then pull IP, hub SSID and current RSSI dBm from this object, refreshed every time
pullData runs. It would perhaps have been preferable to reduce the interval this runs
on, but sadly the WifiManager refresh rate seems to be hardcoded – even calling
startScan from WifiManager does not increase the rate this value is updated.
Then pullData does some housekeeping and displays the calculated values, before
moving on to the audio analysis:
double dAmpMax = 0;
sData = new short[BufferElements2Rec];
recorder.read(sData, 0, BufferElements2Rec);
if(sData.length != 0){
for(int i=0; i<sData.length; i++){
if(Math.abs(sData[i])>=dAmpMax){
dAmpMax=Math.abs(sData[i]);
}
}
}
else Log.d(TAG, "Buffer empty");
It reads from the recorder buffer into sData, and then iterates over each value in the
buffer, calculating the maximum amplitude in the entire buffered segment.
All the collected values for this iteration of pullData are then dropped into the ArrayList
of DataPoint objects lScans:
lScans.add(new DataPoint(sTime, sRSSI, sSSID, dAmpMax));
And at this point the Timer waits for another two seconds before starting over again.
37
The btnStop Button mentioned above works together with the overridden onPause,
since this is a quick and easy way of packaging the necessary functions to stop the
AudioRecord and Timer objects:
public void onClick(View v) {
if (v == btnStop) {
bStop = true;
if(fileWrite(sRecID)){
this.onPause();
}else Toast.makeText (this, "Save Failed",
Toast.LENGTH_SHORT).show();
this.onPause();
}
}
@Override
public void onPause(){
if(recorder != null){
recorder.stop();recorder.release();recorder=null;myTimer.cancel();
Log.e(TAG, "Cleanup.");
}
super.onPause();
}
(I recognize that this is not an optimal way to code this section, but it was low priority.)
The onClick bound to btnStop calls fileWrite, passing in a String containing a device
identifier and timestamp, previously built in onCreate. Assuming that this operation
returns True, then the file write has been successful, and it calls onPause to kill the
AudioRecord and Timer objects. This is handled in onPause because in development
testing, both the AudioRecord and the WifiManager objects would behave strangely
when in the background. In normal operation it should hopefully not be necessary to
remove the application from the foreground until the data collection is complete.
The final code of any import is that which writes the DataPoint object ArrayList to file –
the DataPoint class itself is merely a wrapper for the collected sensor values and
timestamp. Writing to file is handled as follows:
38
File file = new File(getExternalFilesDir(null), s+".txt");
file.createNewFile();
FileOutputStream fOut = new FileOutputStream(file);
OutputStreamWriter outWriter = new OutputStreamWriter(fOut);
outWriter.append(lScans.toString());
outWriter.close();
fOut.close();
The File object is created in the application’s external storage Files directory, with a
filename determined by whatever string is passed to fileWrite “s”, as a text file. Then
the file is created, the FileOutputStream is pointed to this location, the
OutputStreamWriter is passed this byte stream, and then appends the contents of the
DataPoint ArrayList to that location. Append is used, although since the String “s”
contains what should be a unique timestamp, it is not necessary to do so.
7
Results and Evaluation
The following tests were carried out corresponding to the test plans below, in each case
with the devices oriented face up on flat surfaces, their bottom edge towards the hub:
Figure 7.1 – Test One – Chairs One Metre from Device
39
Figure 7.2 – Test Two – Chairs 0.5 Metres from Device
Figure 7.3 – Tests Three and Four – Targets 0.5m from Device – Hub in Adjoining Room
40
In the graph plotted from each set of test results, the solid line represents the ambient
volume, plotted against the right axis, and the dotted line represents the RSSI level,
plotted against the left axis. Each tick on the horizontal axis represents two seconds.
7.1
Test One – Same room, persons present 1m from device.
Samsung Left
-30
1
3
5
7
-35
-40
-45
-50
20000
9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 18000
16000
14000
12000
10000
8000
6000
4000
2000
0
HTC Right
-30
1
-35
-40
-45
-50
3
5
7
20000
9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 18000
16000
14000
12000
10000
8000
6000
4000
2000
0
In this test the room is quiet to begin, and no persons are present in the testing areas –
see figure 7.1. From tick 5 to tick 20, a person is seated in the left testing area. From
tick 20 to tick 35 a person is seated in the right testing area. As can be seen from the
dotted line, there is no appreciable deviation in the trend during these periods, whereas
the expectation would be to see a decrease in RSSI during these periods. Indeed, the
trend in the HTC device is marginally in the opposite direction to that which is predicted.
Also on the chart, the solid line represents audio levels – for ten seconds during each
seated period, the test subject spoke in the direction of the device they were seated
beside. This shows that at least this portion of the test showed a positive result, with
the proviso that the Samsung microphone was found to be highly directional – during
the period that the subject was seated next to, and speaking towards, the HTC, the
Samsung registered no variance from the ambient level. Finally, at the end of the test,
the subject moved away from both test areas, and spoke towards a point between each
device – in this, the results show a corresponding peak.
41
7.2
Test Two – Same room, persons present 0.5m from device.
Samsung Left
-30
-32
-34
-36
-38
-40
-42
-44
-46
-48
-50
1
3
5
7
9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47
20000
18000
16000
14000
12000
10000
8000
6000
4000
2000
0
HTC Right
-30
-32
-34
-36
-38
-40
-42
-44
-46
-48
-50
1
3
5
7
9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47
20000
18000
16000
14000
12000
10000
8000
6000
4000
2000
0
This test was carried out in exactly the same way as the first test, with the only
difference being the distance between subject and device is reduced to 0.5m – see
Figure 7.2. In this experiment the decrease in RSSI values between 5 and 20 for Left, and
between 20 and 35 for Right, is now noticeable. It is far more pronounced in the older
Samsung device, which at first seems surprising, but on reflection, it perhaps implies a
more direction-sensitive antenna in the older device, which is in itself a somewhat useful
result.
Again, the audio data shows correlation, again with pronounced directionality suggested
by the Samsung device. However, there is a point of some concern – the audio peak in
the HTC device is substantially delayed. Although the test audio timings were not
absolutely rigorous, the HTC data has an onset delay in the order of 10 seconds. One
possible explanation for this is that there is an issue with how audio buffering is handled.
In testing during development, there were occasional small delays in the recorded audio
levels, which may be explained by both audio and RSSI processing running in a single
thread, although confirming this would take reworking of the code.
42
7.3
Test Three – Adjoining room, persons 0.5m from device.
The results for the following tests are represented differently. These were carried out
with the hub located in an adjoining room – see figure 7.3. In this figure, as in those
which preceded it, heavy lines represent walls or closed doors. The finer line represents
the outline of a bed, with the targets representing positions for seated figures. Since the
audio portion has been shown to work (with some reservations) further relation of
testing results in differing environments seemed unnecessary, since audio volume is only
minimally affected by such circumstances. Therefore, both sets of RSSI results are
shown in a single graph:
Combined Results - Samsung Solid, HTC Dashed
-30
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38
-40
-50
-60
-70
Given the awkwardness of moving in a more confined environment, the timings were
changed slightly, with a person present in the target corresponding to the solid Samsung
line between tick 8 and tick 20, and present in the area corresponding to the dashed
HTC line between tick 20 and tick 32. These tick values were established by making note
of the changes in the on screen RSSI values when in proximity to each device, and then
locating the corresponding data points in the recorded data. It can be seen that in this
case the overall RSSI levels are decreased in the unobstructed state by approximately
10dBm on average, but that otherwise the response appears somewhat the same as in
the same-room scenario, with two noticeable differences. Firstly, the response from the
HTC is far more pronounced. The phone orientations remain the same, face up on a flat
surface with the bottom edge pointed towards the hub. Therefore it seems strange that
the results should be reversed, since in the previous test the Samsung was more
sensitive.
One possible explanation for this is the multipath effect.
43
The second
noticeable difference supports this to some extent – attenuation of the HTC signal
begins when the subject is proximate to the Samsung device, despite this leaving the
imaginary direct line between HTC and hub unobstructed. This may also be supported
by the suggested increased directionality of the Samsung device. Given that the HTC
may be more receptive to incoming wave paths which deviate from its axis, an obstacle
to a reflected ray coming in at a non-axial path would have a greater effect on the
reported RSSI. It does not however explain why the overall sensor drop in the HTC is so
much more pronounced. That the overall result should be the reverse of the case in the
prior experiment has been discussed, but there is no readily apparent explanation for
the greatly increased magnitude of effect. In the same room experiment at 0.5m range
from the device the drop was in the region of ~5dBm from the resting state, whereas in
the adjoining room it is ~10dBm, which given that it is a logarithmic scale implies a much
greater proportional attenuation. Perhaps an observer with a greater understanding of
signal propagation mechanics could suggest an explanation.
7.4
Test Four – As test three, different hub
Different Hub, Adjoining Room - Samsung Solid, HTC
Dashed
-50
-52
-54
-56
-58
-60
-62
-64
-66
-68
-70
1 2 3 4 5 6 7 8 9 1011121314151617181920212223242526272829303132333435363738394041
Finally one last test was carried out, using the same test parameters as test three,
except using a Sky Hub SR101, rather than the BT Home Hub 3.0 used in all other tests.
Unfortunately test conditions besides this were not ideally controlled, as it was not
possible to bring the Sky Hub to the same test environment – however, it was arranged
as closely to the previous test as possible, with objects lying in the same relation to each
other and carried out in the same format. This lack of control limits the usefulness of
the results obtained, but sadly circumstances did not allow for perfect replication.
44
Therefore the data is only weakly suggestive. From tick 5 to tick 20, a subject was
proximate to the Samsung device. From tick 20 to tick 35, a subject was proximate to
the HTC device. Some correlation is visible in these segments, with a greater response in
the Samsung device this time, and much greater overall variability in the recorded
values. It is impossible to say given imperfect conditions what this means – the only
conclusion that can be drawn is that some response will be present when using a
different hub and a different environment.
Full results from each test can be seen in Appendix 2.
7.5
Evaluation Comments
Overall performance of the application was disappointing. Other tests carried out using
the “1m range from device” test parameters were not included because they showed
the same complete lack of correlation seen in Test One. More testing of the effects of
different environments and different hubs in more properly controlled environments
would have been valuable, but was not possible due to time constraints. Also, more
robust testing to determine the behaviour in non-line-of-sight propagation scenarios
would be necessary to properly test the viability of this approach, given that this would
be the most common scenario in the envisioned use case, that of social interaction
monitoring in elderly care settings, but again, time did not permit sufficient testing.
Therefore the results are at best suggestive, but at least not without some value.
8
Conclusions and Future Improvements
The final software product fell far short of the initially stated requirements. The sensing
application, while imperfectly implemented, provided data that appears in ideal
circumstances to at least fit the expectations of the model – the presence of a person in
close proximity to a mobile device at a point intersecting a line between the device and
a connected hub produces a measureable decrease in RSSI levels, and when this is
combined with collection of amplitude maxima from device microphones, could be used
to infer social interaction in these specific conditions.
However, the method as stands lacks the flexibility needed for use in the care setting
envisioned. The zone of sensor sensitivity is too small to accurately measure the extent
45
of real world social interaction in what would be a much more fluid and uncontrolled
setting than reflected by the tests.
Furthermore, the back end data interpretation software initially proposed is entirely
absent in this implementation.
While it seems likely that the total lack of any
perceptible correlation between human presence and RSSI levels at the 1m range from
device would not be rendered manageable by sophisticated data interpretation
algorithms, at least some attempt to perform more rigorous analysis of the data in these
settings would have been greatly desirable. Sadly, time did not permit such an analysis,
which would have been necessary to carry out the intended human localization
component, rather than the less sophisticated detection of which this software was
capable.
However, the study did at least bear some fruit. Firstly, it confirms the hypothesis that
mobile device sensors are capable of detecting fluctuations in RSSI levels correlating to
the presence of a person in the environment.
Secondly, if one accepts the combination of human proximity and an increase in
ambient noise amplitude maxima as a valid metric for detecting human interaction, it
suggests that the addition of a sound sensing component to such a system would be
valuable.
Thirdly, it suggests that there is variability in the sensing behaviour of different devices,
and in the propagation characteristics of different hubs. This on the one hand presents
a challenge to anyone attempting to create a system based on the basic assumption that
anyone trying to create an ad-hoc low cost sensing network of this type will have to
make do with whatever hubs and devices are available at hand, since the fidelity of any
results would be uncertain until tested on-site. On the other hand, it does suggest a
possible improvement given a more selective approach to hardware. There is likely to
be a class of low cost mobile devices that returns human presence correlated RSSI data
with high fidelity for a given test setting. There is likewise likely to be a class of low cost
consumer Wi-Fi hub that exhibits “useful” Wi-Fi propagation characteristics in a like
variety of test settings.
46
These conclusions in turn suggest a broad variety of future work that could be useful in
developing a practicable implementation of the basic human interaction measuring
model attempted in this work.
Firstly, testing of the system as-is using a much greater variety of devices, hubs and
environments would much better describe the parameters of usability of such a system.
It is likely that there are “ideal” combinations of each that are effective in differing
scenarios, and it is possible that one might find a combination of hub and sensing device
that would be effective in a broad range of environments, which is the primary
requirement for broad use of such a system.
Secondly, development of analytic software based upon these results could well prove
fruitful. A system using Bayesian inference may allow for detection of human presence
at much greater ranges, since the limitation of “fingerprinting” results (that they test for
presence at specific locations rather than directly inferring position from data) could be
relaxed if the parameters of such inference were limited to detection of presence only
rather than localisation.
Thirdly, it is observed from the raw results that RSSI data from multiple hubs is available
directly from the implemented code – the WifiManager object contains this. Inclusion of
such data into the inference portion of any analytic software could present a much
greater fidelity if the implementation was appropriately tuned. However, this would
present a concomitant stricture upon the requirements of any such project – in a real
world setting, the relation of hub positioning, sensing areas of interest, and viable device
positions, would require that any implementation seeking to take advantage of such
features would need to be many orders of magnitude more flexible than that contained
herein.
Finally, and for the use case in which the presently limited features of the implemented
system would be useful for generating a rough approximation of the duration of human
interaction in a limited set of test configurations, there are many improvements that
could be made to the user interface. More descriptive and less technical user feedback
would be advantageous. Smoother user interface operation would make it much more
47
viable for general use in a care scenario. Code refactoring based on better principles
could likely improve efficiency and sensor fidelity. Even some automation of data
interpretation based on simple Excel macros would be extremely useful if the system
were to be used by someone unfamiliar with IT.
The limitations of the given
implementation are readily apparent, and could be ameliorated easily given a little time.
Also, one last observation – RSSI is a value calculated by the device hardware. Since it is
a derived value, it is perhaps not ideal. This, combined with the apparent limitation of
sensing every two seconds, suggests that access to a value obtained from closer to the
physical layer and with greater control of the sensing regime might be of value in any
future investigation of this topic.
48
References
Aly, H. and Youssef, M., 2013. New Insights into Wifi-based Device-free Localization. In:
Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing
Adjunct Publication. Zurich, Switzerland. New York, NY, USA: ACM, 541-548.
Beleznai, C., Fruhstuck, B. and Bischof, H., 2005. Human tracking by mode seeking. In:
Image and Signal Processing and Analysis, 2005. ISPA 2005. Proceedings of the 4th
International Symposium on. 1-6.
Cai, Q. and Aggarwal, J.K., 1996. Tracking human motion using multiple cameras. In:
Pattern Recognition, 1996., Proceedings of the 13th International Conference on. 68-72
vol.3.
Chen, D., Malkin, R. and Yang, J., 2004. Multimodal Detection of Human Interaction
Events in a Nursing Home Environment. In: Proceedings of the 6th International
Conference on Multimodal Interfaces. State College, PA, USA. New York, NY, USA: ACM,
82-89.
Deak, G., Curran, K., Condell, J. and Deak, D. 2014. Detection of multi-occupancy using
device-free passive localisation. Wireless Sensor Systems, IET, 4 (3), 130-137.
DeMarco, K.J., West, M.E. and Howard, A.M., 2013. Sonar-Based Detection and Tracking
of a Diver for Underwater Human-Robot Interaction Scenarios. In: Systems, Man, and
Cybernetics (SMC), 2013 IEEE International Conference on. 2378-2383.
Frazier, L.M. 1996. Surveillance through walls and other opaque materials. Aerospace
and Electronic Systems Magazine, IEEE, 11 (10), 6-9.
Haijing, W., Wei, W. and Huanshui, Z., 2014. Human body detection of Wireless Sensor
Network based on RSSI. In: Control and Decision Conference (2014 CCDC), The 26th
Chinese. 4879-4884.
Harsha, B.V., 2004. A noise robust speech activity detection algorithm. In: Intelligent
Multimedia, Video and Speech Processing, 2004. Proceedings of 2004 International
Symposium on. 322-325.
Jiing-Yi, W., Chia-Pang, C., Tzu-Shiang, L., Cheng-Long, L., Tzu-Yun, L. and Joe-Air, J.,
2012. High-Precision RSSI-based Indoor Localization Using a Transmission Power
Adjustment Strategy for Wireless Sensor Networks. In: High Performance Computing and
Communication & 2012 IEEE 9th International Conference on Embedded Software and
Systems (HPCC-ICESS), 2012 IEEE 14th International Conference on. 1634-1638.
Kara, A. and Bertoni, H.L. 2006. Effect of people moving near short-range indoor
propagation links at 2.45 GHz. Communications and Networks, Journal of, 8 (3), 286-289.
Khan, S., Javed, O., Rasheed, Z. and Shah, M., 2001. Human tracking in multiple cameras.
In: Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference
on. 331-336 vol.1.
49
Landsberger, H.A., 1958. Hawthorne revisited: management and the worker: its critics,
and developments in human relations in industry. Ithaca, N.Y.: Cornell University.
Lauterbach, C., Steinhage, A. and Techmer, A., 2012. Large-area wireless sensor system
based on smart textiles. In: Systems, Signals and Devices (SSD), 2012 9th International
Multi-Conference on. 1-2.
Lui, G., Gallagher, T., Binghao Li, Dempster, A.G. and Rizos, C., 2011. Differences in RSSI
readings made by different Wi-Fi chipsets: A limitation of WLAN localization. In:
Localization and GNSS (ICL-GNSS), 2011 International Conference on. 53-57.
Luo, X., O’Brien, W.J. and Julien, C.L. 2011. Comparative evaluation of Received SignalStrength Index (RSSI) based indoor localization techniques for construction jobsites.
Advanced Engineering Informatics, 25 (2), 355-363.
Matic, A., Osmani, V., Maxhuni, A. and Mayora, O., 2012. Multi-modal mobile sensing of
social interactions. In: Pervasive Computing Technologies for Healthcare
(PervasiveHealth), 2012 6th International Conference on. 105-114.
Matic, A., Osmani, V. and Mayora-Ibarra, O. 2012. Analysis of Social Interactions
Through Mobile Phones. Mobile Networks and Applications, 17 (6), 808-819.
Mrazovac, B., Todorovic, B.M., Bjelica, M.Z. and Kukolj, D., 2013. Reaching the next level
of indoor human presence detection: An RF based solution. In: Telecommunication in
Modern Satellite, Cable and Broadcasting Services (TELSIKS), 2013 11th International
Conference on. 297-300.
Murakami, T., Matsumoto, Y., Fujii, K., Sugiura, A. and Yamanaka, Y., 2003. Propagation
characteristics of the microwave oven noise interfering with wireless systems in the 2.4
GHz band. In: Personal, Indoor and Mobile Radio Communications, 2003. PIMRC 2003.
14th IEEE Proceedings on. 2726-2729 vol.3.
Murakita, T., Ikeda, T. and Ishiguro, H., 2004. Human tracking using floor sensors based
on the Markov chain Monte Carlo method. In: Pattern Recognition, 2004. ICPR 2004.
Proceedings of the 17th International Conference on. 917-920 Vol.4.
Qian, D., and Dargie, W., 2012. Evaluation of the reliability of RSSI for indoor localization.
In: Wireless Communications in Unusual and Confined Areas (ICWCUCA), 2012
International Conference on. 1-6.
Richardson, B., Leydon, K., Fernstrom, M. and Paradiso, J.A., 2004. Z-Tiles: Building
Blocks for Modular, Pressure-sensing Floorspaces. In: CHI '04 Extended Abstracts on
Human Factors in Computing Systems. Vienna, Austria. New York, NY, USA: ACM, 15291532.
Schillebeeckx, F., Vanderelst, D., Reijniers, J. and Peremans, H. 2012. Special section on
biologically-inspired radar and sonar systems - Evaluating three-dimensional localisation
information generated by bio-inspired in-air sonar. Radar, Sonar & Navigation, IET, 6 (6),
516-525.
50
Seifeldin, M., Saeed, A., Kosba, A.E., El-Keyi, A. and Youssef, M. 2013. Nuzzer: A LargeScale Device-Free Passive Localization System for Wireless Environments. Mobile
Computing, IEEE Transactions on, 12 (7), 1321-1334.
Valtonen, M., Maentausta, J. and Vanhala, J., 2009. TileTrack: Capacitive human tracking
using floor tiles. In: Pervasive Computing and Communications, 2009. PerCom 2009. IEEE
International Conference on. 1-10.
Xin, L., Dengyu, Q., and Ye, L., 2014. Macro-motion detection using ultra-wideband
impulse radar. In: Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual
International Conference of the IEEE. 2237-2240.
Yadong, W., and Yan, L., 2000. Robust speech/non-speech detection in adverse
conditions using the fuzzy polarity correlation method. In: Systems, Man, and
Cybernetics, 2000 IEEE International Conference on. 2935-2939 vol.4.
Youssef, M., Mah, M. and Agrawala, A., 2007. Challenges: Device-free Passive
Localization for Wireless Environments. In: Proceedings of the 13th Annual ACM
International Conference on Mobile Computing and Networking. Montr\&\#233;al,
Qu\&\#233;bec, Canada. New York, NY, USA: ACM, 222-229.
51
Appendix 1 – Code
package com.dougan.rssitrack;
import android.content.Context;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;
import android.net.wifi.WifiInfo;
import android.net.wifi.WifiManager;
import android.os.Bundle;
import android.os.Handler;
import android.support.v7.app.ActionBarActivity;
import android.text.format.Formatter;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;
import android.widget.Toast;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;
public class MainActivity extends ActionBarActivity implements
View.OnClickListener{
private TextView textConnected, textIP, textSSID, textRSSI,
52
textTime;
private Button btnStop;
private final Handler hTime = new Handler(); //Handler for timer
private final String TAG = "RSSITrack";
//Tag for debug
private ArrayList<DataPoint> lScans = new ArrayList<>();
private Timer myTimer;
private String sRecID;
private NetworkInfo nInfo;
private WifiManager wManager;
private AudioRecord recorder = null;
private static final int[] intSampleRates = new int[] { 8000, 11025,
22050, 44100 };
private int BufferElements2Rec;
short sData[];
boolean bStop = false;
//
Galaxy S2 - Success at 8000Hz, bits: 2, channel: 16,
format: 2, buffer: 1024
//
One Mini 2 - Success at 8000Hz, bits: 2, channel: 16,
format: 2, buffer: 640
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
textConnected = (TextView)findViewById(R.id.Connected);
textIP = (TextView)findViewById(R.id.IP);
textSSID = (TextView)findViewById(R.id.SSID);
textRSSI = (TextView)findViewById(R.id.RSSI);
textTime = (TextView)findViewById(R.id.Time);
btnStop = (Button) this.findViewById(R.id.btnStop);
btnStop.setOnClickListener(this);
53
try {
recorder = getAudioParms();
recorder.startRecording();
//Log.d(TAG, "Recorder state " +
recorder.getRecordingState());//3 is good
}catch (IllegalArgumentException e){
e.printStackTrace();
recorder = null;
Toast.makeText (this, "Incompatible Device",
Toast.LENGTH_LONG).show();
}
ConnectivityManager cManager = (ConnectivityManager)
getSystemService(CONNECTIVITY_SERVICE);
nInfo = cManager.getNetworkInfo(ConnectivityManager.TYPE_WIFI);
wManager = (WifiManager)getSystemService(Context.WIFI_SERVICE);
String uid = android.os.Build.SERIAL;
sRecID = "Dev"+uid+"Time"+System.currentTimeMillis();
myTimer = new Timer();
myTimer.schedule(new TimerTask() {
@Override
public void run() {
hTime.post(pullData);
}
}, 0, 2000);
//Find a way to increase scan rate?
}
public void onClick(View v) {
if (v == btnStop) {
bStop = true;
if(fileWrite(sRecID)){
this.onPause();
}else Toast.makeText (this, "Save Failed",
54
Toast.LENGTH_SHORT).show();this.onPause();
}
}
@Override
public void onPause(){
if(recorder != null){
recorder.stop();recorder.release();recorder=null;myTimer.cancel();
Log.e(TAG, "Cleanup complete.");
}
super.onPause();
}
final Runnable pullData = new Runnable() {
public void run() {
if (nInfo.isConnected()&&!bStop){
//Log.d(TAG, wInfo.toString());
//Debug print of wifi
info
//Log.d(TAG,
wManager.getScanResults().toString());//Multiple hub support?
textConnected.setText("Connected");
WifiInfo wInfo = wManager.getConnectionInfo();
int myIp = wInfo.getIpAddress();
String sIP = Formatter.formatIpAddress(myIp);//Will not
work with IPV6, not essential
String sSSID = wInfo.getSSID();
String sRSSI = String.valueOf(wInfo.getRssi());
//Log.d(TAG,sRSSI);
long lTime = System.currentTimeMillis();
String sTime = getGoodTime(lTime);//Not here for a long
55
time
textIP.setText(sIP);
textSSID.setText(sSSID);
textRSSI.setText(sRSSI);
textTime.setText(sTime);
double dAmpMax = 0;
sData = new short[BufferElements2Rec];
recorder.read(sData, 0, BufferElements2Rec);
if(sData.length != 0){
for(int i=0; i<sData.length; i++){
if(Math.abs(sData[i])>=dAmpMax){
dAmpMax=Math.abs(sData[i]);
}
}
}
else Log.d(TAG, "Buffer empty");
lScans.add(new DataPoint(sTime, sRSSI, sSSID, dAmpMax));
//Log.d(TAG, lScans.toString());
//Log.d(TAG, sDataString);
}
else{
Log.d(TAG, nInfo.toString());
textConnected.setText("Not Connected");
textIP.setText("Not Connected");
textSSID.setText("Not Connected");
textRSSI.setText("Not Connected");
}
}
};
public boolean fileWrite(String s){
try {
File file = new File(getExternalFilesDir(null), s+".txt");
56
file.createNewFile();
FileOutputStream fOut = new FileOutputStream(file);
OutputStreamWriter outWriter = new OutputStreamWriter(fOut);
outWriter.append(lScans.toString());
outWriter.close();
fOut.close();
Toast.makeText(getApplicationContext(),"Saved",Toast.LENGTH_LONG).show()
;
return true;
} catch (FileNotFoundException e) {
e.printStackTrace();
return false;
}
catch (IOException e) {
e.printStackTrace();
return false;
}
}
public AudioRecord getAudioParms() {
for (int rate : intSampleRates) {
for (short audioFormat : new short[] {
AudioFormat.ENCODING_PCM_8BIT, AudioFormat.ENCODING_PCM_16BIT }) {
for (short channelConfig : new short[] {
AudioFormat.CHANNEL_IN_MONO, AudioFormat.CHANNEL_IN_STEREO }) {
try {
Log.d(TAG, "Attempting rate "+rate+", bits:
"+audioFormat+", channel: "+channelConfig);
int bufferSize =
AudioRecord.getMinBufferSize(rate, channelConfig, audioFormat);
if (bufferSize != AudioRecord.ERROR_BAD_VALUE) {
Log.d(TAG, "Success at rate "+rate+", bits:
"+audioFormat+", channel: "+channelConfig+", format: "+audioFormat+",
57
buffer: "+bufferSize);
BufferElements2Rec = bufferSize/audioFormat;
AudioRecord recorder = new
AudioRecord(MediaRecorder.AudioSource.DEFAULT, rate, channelConfig,
audioFormat, bufferSize);
if (recorder.getState() ==
AudioRecord.STATE_INITIALIZED) return recorder;
}
} catch (Exception e) {
Log.e(TAG, rate + "Invalid parms",e);
}
}
}
}
return null;
}
/* private boolean externalStorageAvailable() {
return
Environment.MEDIA_MOUNTED
.equals(Environment.getExternalStorageState());
}*/
public String getGoodTime(long millis)
{
if(millis < 0) throw new IllegalArgumentException("Invalid
duration");
return (new SimpleDateFormat("D:mm:ss:SSS")).format(new
Date(millis));
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is
present.
58
getMenuInflater().inflate(R.menu.menu_main, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
int id = item.getItemId();
//noinspection SimplifiableIfStatement
if (id == R.id.action_settings) {
return true;
}
return super.onOptionsItemSelected(item);
}
}
package com.dougan.rssitrack;
public class DataPoint {
String RSSI, SSID, time;
double ampMax;
public DataPoint(String time, String level, String SSID, double
ampMax){
this.RSSI = level;
this.time = time;
this.SSID = SSID;
this.ampMax = ampMax;
}
@Override
public String toString() {
59
StringBuilder sb = new StringBuilder();
sb.append(time).append(",").append(SSID).append(",").append(RSSI).append
(",").append(ampMax).append("\n");
return sb.toString();
}
}
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.dougan.rssitrack" >
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"
/>
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE"
/>
<uses-permission
android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission
android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name=".MainActivity"
android:label="@string/app_name"
android:screenOrientation="portrait" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category
android:name="android.intent.category.LAUNCHER" />
60
</intent-filter>
</activity>
</application>
</manifest>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent">
<!--android:id="@+id/">-->
<TextView
android:id="@+id/Connected"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/sIP"
/>
<TextView
android:id="@+id/IP"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/sSSID"
/>
<TextView
android:id="@+id/SSID"
61
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/sRSSI"
/>
<TextView
android:id="@+id/RSSI"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/sTime"
/>
<TextView
android:id="@+id/Time"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
<Button
android:keepScreenOn="true"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/sButton"
android:id="@+id/btnStop"
android:layout_gravity="center_horizontal" />
</LinearLayout>
62
Appendix 2 – Test Results
Test One – Samsung:
118:01:22:085 BTHub3-RHT4
BTHub3-RHT4
118:01:24:071
BTHub3-RHT4
118:01:26:071
BTHub3-RHT4
118:01:28:073
BTHub3-RHT4
118:01:30:072
BTHub3-RHT4
118:01:32:072
BTHub3-RHT4
118:01:34:085
BTHub3-RHT4
118:01:36:075
BTHub3-RHT4
118:01:38:074
BTHub3-RHT4
118:01:40:073
BTHub3-RHT4
118:01:42:074
BTHub3-RHT4
118:01:44:075
BTHub3-RHT4
118:01:46:075
BTHub3-RHT4
118:01:48:076
BTHub3-RHT4
118:01:50:078
BTHub3-RHT4
118:01:52:077
BTHub3-RHT4
118:01:54:077
BTHub3-RHT4
118:01:56:077
BTHub3-RHT4
118:01:58:078
BTHub3-RHT4
118:02:00:082
BTHub3-RHT4
118:02:02:081
BTHub3-RHT4
118:02:04:081
BTHub3-RHT4
118:02:06:079
-41
-41
692
247
-39
951
-40
535
-40
61
-40
64
-41
64
-41
54
-41
59
-40
53
-40
53
-40
58
-40
86
-40
1573
-39
7301
-40
7301
-40
5492
-39
492
-42
190
-42
297
-44
53
-44
57
-42
57
63
BTHub3-RHT4
-41
121
BTHub3-RHT4
-41
121
BTHub3-RHT4
-41
159
BTHub3-RHT4
-41
220
BTHub3-RHT4
-41
621
BTHub3-RHT4
-41
534
BTHub3-RHT4
-39
188
BTHub3-RHT4
-39
84
BTHub3-RHT4
-38
80
BTHub3-RHT4
-38
123
BTHub3-RHT4
-38
97
BTHub3-RHT4
-38
384
BTHub3-RHT4
-38
6078
BTHub3-RHT4
-38
5867
BTHub3-RHT4
-39
238
BTHub3-RHT4
-38
3044
BTHub3-RHT4
-38
2047
BTHub3-RHT4
-38
67
BTHub3-RHT4
-38
63
BTHub3-RHT4
-38
61
BTHub3-RHT4
-38
74
BTHub3-RHT4
-41
66
BTHub3-RHT4
-41
63
BTHub3-RHT4
-41
74
BTHub3-RHT4
-41
74
118:02:08:080
118:02:10:080
118:02:12:081
118:02:14:083
118:02:16:084
118:02:18:083
118:02:20:083
118:02:22:085
118:02:24:084
118:02:26:083
118:02:28:085
118:02:30:086
118:02:32:087
118:02:34:086
118:02:36:084
118:02:38:090
118:02:40:087
118:02:42:088
118:02:44:087
118:02:46:085
118:02:48:085
118:02:50:090
118:02:52:089
118:02:54:090
64
118:02:56:089
BTHub3-RHT4
-41
60
-38
-38
33
27
-38
30
-39
65
-37
49
-37
34
-38
38
-36
36
-36
31
-39
28
-37
34
-37
39
-39
31
-38
3429
-38
1287
-38
2810
-40
531
-40
42
-38
58
-39
37
-39
58
118:02:58:090
Test One – HTC:
118:01:23:112 BTHub3-RHT4
BTHub3-RHT4
118:01:25:101
BTHub3-RHT4
118:01:27:098
BTHub3-RHT4
118:01:29:101
BTHub3-RHT4
118:01:31:102
BTHub3-RHT4
118:01:33:103
BTHub3-RHT4
118:01:35:104
BTHub3-RHT4
118:01:37:105
BTHub3-RHT4
118:01:39:105
BTHub3-RHT4
118:01:41:103
BTHub3-RHT4
118:01:43:107
BTHub3-RHT4
118:01:45:107
BTHub3-RHT4
118:01:47:109
BTHub3-RHT4
118:01:49:108
BTHub3-RHT4
118:01:51:110
BTHub3-RHT4
118:01:53:109
BTHub3-RHT4
118:01:55:106
BTHub3-RHT4
118:01:57:106
BTHub3-RHT4
118:01:59:111
BTHub3-RHT4
118:02:01:111
BTHub3-RHT4
118:02:03:109
65
BTHub3-RHT4
-37
44
BTHub3-RHT4
-37
34
BTHub3-RHT4
-38
44
BTHub3-RHT4
-38
6702
BTHub3-RHT4
-38
2770
BTHub3-RHT4
-37
6053
BTHub3-RHT4
-37
7838
BTHub3-RHT4
-37
6965
BTHub3-RHT4
-39
7502
BTHub3-RHT4
-36
6965
BTHub3-RHT4
-36
74
BTHub3-RHT4
-35
89
BTHub3-RHT4
-35
42
BTHub3-RHT4
-35
34
BTHub3-RHT4
-35
3545
BTHub3-RHT4
-37
3873
BTHub3-RHT4
-37
4642
BTHub3-RHT4
-34
3770
BTHub3-RHT4
-36
278
BTHub3-RHT4
-36
31
BTHub3-RHT4
-36
42
BTHub3-RHT4
-37
36
BTHub3-RHT4
-37
35
BTHub3-RHT4
-35
44
BTHub3-RHT4
-38
41
118:02:05:113
118:02:07:114
118:02:09:113
118:02:11:111
118:02:13:114
118:02:15:115
118:02:17:112
118:02:19:113
118:02:21:117
118:02:23:118
118:02:25:116
118:02:27:120
118:02:29:119
118:02:31:120
118:02:33:121
118:02:35:121
118:02:37:120
118:02:39:118
118:02:41:117
118:02:43:123
118:02:45:123
118:02:47:123
118:02:49:124
118:02:51:124
66
118:02:53:124
BTHub3-RHT4
-38
172
BTHub3-RHT4
-38
205
BTHub3-RHT4
-38
172
-42
-42
740
295
-38
153
-36
106
-36
99
-37
84
-43
181
-43
181
-48
107
-44
128
-44
128
-43
108
-43
1203
-43
3369
-43
11269
-43
11269
-43
2922
-43
1393
-43
104
118:02:55:122
118:02:57:127
118:02:59:126
Test Two – Samsung:
118:08:32:192 BTHub3-RHT4
BTHub3-RHT4
118:08:34:182
BTHub3-RHT4
118:08:36:183
BTHub3-RHT4
118:08:38:237
BTHub3-RHT4
118:08:40:214
BTHub3-RHT4
118:08:42:186
BTHub3-RHT4
118:08:44:184
BTHub3-RHT4
118:08:46:183
BTHub3-RHT4
118:08:48:184
BTHub3-RHT4
118:08:50:185
BTHub3-RHT4
118:08:52:187
BTHub3-RHT4
118:08:54:187
BTHub3-RHT4
118:08:56:188
BTHub3-RHT4
118:08:58:188
BTHub3-RHT4
118:09:00:186
BTHub3-RHT4
118:09:02:190
BTHub3-RHT4
118:09:04:189
BTHub3-RHT4
118:09:06:190
BTHub3-RHT4
118:09:08:190
67
BTHub3-RHT4
-43
132
BTHub3-RHT4
-43
184
BTHub3-RHT4
-41
120
BTHub3-RHT4
-41
109
BTHub3-RHT4
-37
108
BTHub3-RHT4
-38
108
BTHub3-RHT4
-38
104
BTHub3-RHT4
-37
104
BTHub3-RHT4
-37
93
BTHub3-RHT4
-37
1575
BTHub3-RHT4
-35
2224
BTHub3-RHT4
-36
2224
BTHub3-RHT4
-36
399
BTHub3-RHT4
-37
1159
BTHub3-RHT4
-37
837
BTHub3-RHT4
-37
84
BTHub3-RHT4
-37
87
BTHub3-RHT4
-38
96
BTHub3-RHT4
-38
96
BTHub3-RHT4
-37
131
BTHub3-RHT4
-37
108
BTHub3-RHT4
-37
116
BTHub3-RHT4
-37
137
BTHub3-RHT4
-37
99
BTHub3-RHT4
-37
226
118:09:10:190
118:09:12:191
118:09:14:191
118:09:16:192
118:09:18:192
118:09:20:192
118:09:22:192
118:09:24:193
118:09:26:191
118:09:28:193
118:09:30:192
118:09:32:192
118:09:34:192
118:09:36:192
118:09:38:192
118:09:40:193
118:09:42:193
118:09:44:193
118:09:46:193
118:09:48:197
118:09:50:196
118:09:52:197
118:09:54:195
118:09:56:196
68
118:09:58:197
BTHub3-RHT4
-38
497
BTHub3-RHT4
-37
2095
BTHub3-RHT4
-37
1212
BTHub3-RHT4
-37
619
-40
-41
45
35
-41
41
-40
56
-37
62
-37
43
-39
45
-40
34
-40
45
-39
42
-39
37
-39
39
-39
40
-40
7994
-40
8819
-39
7994
-39
1121
-39
8609
118:10:00:193
118:10:02:197
118:10:04:197
118:10:06:199
Test Two – HTC:
118:08:32:769 BTHub3-RHT4
BTHub3-RHT4
118:08:34:749
BTHub3-RHT4
118:08:36:750
BTHub3-RHT4
118:08:38:750
BTHub3-RHT4
118:08:40:750
BTHub3-RHT4
118:08:42:750
BTHub3-RHT4
118:08:44:751
BTHub3-RHT4
118:08:46:753
BTHub3-RHT4
118:08:48:762
BTHub3-RHT4
118:08:50:754
BTHub3-RHT4
118:08:52:754
BTHub3-RHT4
118:08:54:753
BTHub3-RHT4
118:08:56:753
BTHub3-RHT4
118:08:58:753
BTHub3-RHT4
118:09:00:753
BTHub3-RHT4
118:09:02:753
BTHub3-RHT4
118:09:04:754
BTHub3-RHT4
118:09:06:754
69
BTHub3-RHT4
-39
7045
BTHub3-RHT4
-39
8609
BTHub3-RHT4
-39
46
BTHub3-RHT4
-39
37
BTHub3-RHT4
-39
38
BTHub3-RHT4
-38
43
BTHub3-RHT4
-41
48
BTHub3-RHT4
-41
43
BTHub3-RHT4
-42
41
BTHub3-RHT4
-42
37
BTHub3-RHT4
-42
43
BTHub3-RHT4
-42
13343
BTHub3-RHT4
-42
11023
BTHub3-RHT4
-42
13343
BTHub3-RHT4
-42
11971
BTHub3-RHT4
-42
16694
BTHub3-RHT4
-42
13185
BTHub3-RHT4
-42
10063
BTHub3-RHT4
-42
34
BTHub3-RHT4
-42
35
BTHub3-RHT4
-43
38
BTHub3-RHT4
-41
35
BTHub3-RHT4
-41
36
BTHub3-RHT4
-38
43
BTHub3-RHT4
-37
38
118:09:08:754
118:09:10:757
118:09:12:755
118:09:14:758
118:09:16:758
118:09:18:759
118:09:20:758
118:09:22:757
118:09:24:758
118:09:26:762
118:09:28:761
118:09:30:762
118:09:32:763
118:09:34:763
118:09:36:761
118:09:38:760
118:09:40:761
118:09:42:764
118:09:44:766
118:09:46:764
118:09:48:764
118:09:50:765
118:09:52:766
118:09:54:766
70
118:09:56:763
BTHub3-RHT4
-37
43
BTHub3-RHT4
-39
8488
BTHub3-RHT4
-38
5869
BTHub3-RHT4
-38
8488
BTHub3-RHT4
-40
8782
118:09:58:764
118:10:00:768
118:10:02:770
118:10:04:768
118:10:06:770
Test Three – Samsung:
[119:56:52:236
119:56:54:231
119:56:56:232
119:56:58:256
119:57:00:231
119:57:02:231
119:57:04:231
119:57:06:231
119:57:08:232
119:57:10:287
119:57:12:234
119:57:14:233
119:57:16:235
119:57:18:235
119:57:20:237
119:57:22:237
119:57:24:235
119:57:26:902
119:57:28:236
119:57:30:237
119:57:32:237
119:57:34:239
119:57:36:242
119:57:38:238
119:57:40:241
119:57:42:243
119:57:44:239
119:57:46:242
119:57:48:243
119:57:50:243
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
-54
-47
-47
-46
-47
-47
-45
-45
-53
-53
-53
-51
-50
-50
-50
-50
-50
-51
-47
-47
-45
-46
-46
-47
-45
-45
-44
-44
-44
-45
749
455
374
171
50
50
47
2389
2185
274
47
46
49
2533
2350
57
1059
7031
7031
340
108
46
60
43
43
58
536
412
3922
3982
71
119:57:52:244
119:57:54:244
119:57:56:246
119:57:58:245
119:58:00:243
119:58:02:247
119:58:04:246
119:58:06:246
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
BTHub3-RHT4
-45
-45
-44
-44
-44
-44
-44
-44
356
356
48
212
994
904
154
122
Test Three – HTC:
119:56:52:294 BTHub3-RHT4
BTHub3-RHT4
119:56:54:283
BTHub3-RHT4
119:56:56:284
BTHub3-RHT4
119:56:58:283
BTHub3-RHT4
119:57:00:283
BTHub3-RHT4
119:57:02:283
BTHub3-RHT4
119:57:04:284
BTHub3-RHT4
119:57:06:285
BTHub3-RHT4
119:57:08:285
BTHub3-RHT4
119:57:10:285
BTHub3-RHT4
119:57:12:286
BTHub3-RHT4
119:57:14:286
BTHub3-RHT4
119:57:16:286
BTHub3-RHT4
119:57:18:288
BTHub3-RHT4
119:57:20:286
BTHub3-RHT4
119:57:22:286
BTHub3-RHT4
119:57:24:286
BTHub3-RHT4
119:57:26:286
BTHub3-RHT4
-57
-55
97
90
-55
59
-54
59
-50
53
-50
54
-49
39
-49
31
-49
121
-54
421
-54
165
-54
421
-53
30
-54
25
-54
37
-53
201
-54
5481
-54
4946
-55
5481
72
119:57:28:287
BTHub3-RHT4
-55
62
BTHub3-RHT4
-55
286
BTHub3-RHT4
-63
85
BTHub3-RHT4
-64
126
BTHub3-RHT4
-64
57
BTHub3-RHT4
-64
27
BTHub3-RHT4
-64
26
BTHub3-RHT4
-64
4141
BTHub3-RHT4
-62
2669
BTHub3-RHT4
-62
4141
BTHub3-RHT4
-62
35
BTHub3-RHT4
-62
1667
BTHub3-RHT4
-62
1669
BTHub3-RHT4
-51
754
BTHub3-RHT4
-51
1001
BTHub3-RHT4
-51
91
BTHub3-RHT4
-53
400
BTHub3-RHT4
-52
1602
BTHub3-RHT4
-52
881
119:57:30:287
119:57:32:288
119:57:34:287
119:57:36:287
119:57:38:288
119:57:40:288
119:57:42:288
119:57:44:288
119:57:46:354
119:57:48:288
119:57:50:289
119:57:52:289
119:57:54:288
119:57:56:289
119:57:58:289
119:58:00:291
119:58:02:288
119:58:04:288
119:58:06:315
Test Four – Samsung:
[115:06:14:988
115:06:16:961
115:06:18:960
115:06:20:961
115:06:22:960
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
-67
-67
-61
-60
-60
953
4723
6206
1497
881
73
115:06:24:960
115:06:26:962
115:06:28:960
115:06:30:960
115:06:32:961
115:06:34:961
115:06:36:961
115:06:38:963
115:06:40:963
115:06:42:963
115:06:44:963
115:06:46:963
115:06:48:965
115:06:50:964
115:06:52:966
115:06:54:965
115:06:56:968
115:06:58:967
115:07:00:969
115:07:02:967
115:07:04:970
115:07:06:972
115:07:08:972
115:07:10:971
115:07:12:972
115:07:14:972
115:07:16:973
115:07:18:973
115:07:20:977
115:07:22:976
115:07:24:973
115:07:26:976
115:07:28:977
115:07:30:975
115:07:32:975
115:07:34:982
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
-64
-62
-62
-65
-62
-62
-61
-61
-61
-61
-63
-63
-67
-67
-67
-64
-60
-60
-59
-59
-59
-62
-61
-61
-64
-67
-67
-63
-64
-64
-60
-61
-61
-63
-65
-65
1908
1908
3436
3206
1946
2744
2551
2551
3412
4042
1657
1058
648
2348
2348
1320
1084
2793
4902
1030
991
1406
1272
1712
1601
1601
5592
5295
1656
6829
6225
2075
4484
1964
1748
1748
Test Four – HTC:
[115:06:16:643
115:06:18:627
115:06:20:630
115:06:22:632
SKY9A698
SKY9A698
SKY9A698
SKY9A698
-65
-65
-63
-64
77
155
135
334
74
115:06:24:628
115:06:26:632
115:06:28:635
115:06:30:636
115:06:32:637
115:06:34:639
115:06:36:638
115:06:38:639
115:06:40:637
115:06:42:649
115:06:44:639
115:06:46:641
115:06:48:638
115:06:50:642
115:06:52:642
115:06:54:642
115:06:56:644
115:06:58:644
115:07:00:647
115:07:02:644
115:07:04:645
115:07:06:649
115:07:08:650
115:07:10:645
115:07:12:648
115:07:14:650
115:07:16:650
115:07:18:649
115:07:20:653
115:07:22:651
115:07:24:649
115:07:26:649
115:07:28:651
115:07:30:653
115:07:32:653
115:07:34:654
115:07:36:652
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
SKY9A698
-64
-59
-62
-62
-58
-57
-57
-56
-58
-58
-58
-58
-57
-58
-58
-62
-62
-62
-60
-61
-61
-63
-63
-63
-63
-61
-61
-63
-63
-63
-62
-61
-61
-62
-61
-61
-60
868
334
3378
2655
5517
1553
4910
321
1327
1125
1327
1358
7046
5601
6118
422
677
938
4030
6294
11500
6294
3315
3286
2285
452
904
907
1771
1589
1771
14332
10318
14332
3878
2991
3130
75
76
Download