>> Mike Sinclair: Aiden comes to us from Dublin City University where he
worked with Microsoft SenseCam, Lindsay Williams' SenseCam, and as I
understand it, you were doing it to summarize into some sort of lifelog.
>> Aiden Doherty: Yes, summarizing more or less --
>> Mike Sinclair: I don't know if that's part of your talk, but that --
>> Aiden Doherty: There's a small bit of that at the end, yes, summarizing by means of images basically.
>> Mike Sinclair: Okay. And so you've been working on a thing called SmartLogger. And for those of you who are familiar with Nuria Oliver's work, I did some work with her a couple of years ago where we used an oximeter and a Bluetooth transmitter to log onto an early smartphone, and I think you're extending that into a much more usable and much lighter version on the smartphone.
So without further ado, Aiden.
>> Aiden Doherty: Thank you, Mike. Thanks very much. Hi, folks. So I'm over
here from Ireland myself.
>>: [inaudible].
>> Aiden Doherty: So thanks for coming here, first of all. I know you're all very
busy in the day there, so I'll try and keep it around 25, 30 minutes or something
like that there, and we'll have time for questions afterwards.
So as I said, I'm part of a group called CLARITY in Dublin, Ireland, where I'm a post-doctoral researcher, and I've been over here the last three months, as Mike has said. First of all, I'll give a bit of background on myself and my mentor, Chris, and then, to ease us gently in, we'll talk about the whole goal of this project: using your cell phone to log physiological activities.
So, first of all, the research group I come from: we recently got government funding of around 20 million dollars over a five-year period, and when I counted yesterday we had 85 researchers. I know there were some people missing, so we've probably got around 90 to 100 researchers, and then support staff as well.
An interesting thing about this group is that we don't only have people working in computing, like personalization and information retrieval; we also have material scientists, so chemists, sports scientists, and people working with, say, physiotherapy or sensor devices. So we're pursuing this idea of what we call a sensor web: we've got lots of devices around us now, and they're cheap and robust, and we're able to log lots of data about ourselves physiologically and also about the environment as well.
And what we're trying to do is move beyond our own research silos: if I'm a computer scientist, I don't only want to talk to other computer scientists; it's good to talk to the material scientists or the sports scientists as well. So we've a broader idea of what we'd like to do, and we're trying to get a couple of demonstrator projects in at the end of this funded period. One will be in personal health and wellness, so logging physiological data of many different types. And we also want to be able to log the environment around us, maybe to detect chemicals and water pollution and things like that, air quality too. Basically the whole thing is trying to bridge the digital-physical divide.
So that's the background to me coming over here. And then to hone in a bit more on this project: one of the most ubiquitous devices around us is indeed the cell phone. This is a sort of famous picture here: if you go to some of the, let's say, least developed countries in the world, there's sometimes no running water, no electricity, but a lot of people do have cell phones. And indeed if we compare the number of cell phones to PCs, it's around a four-to-one ratio. So we want to try and piggyback on top of the cell phone in terms of applications. And it's quite interesting that around 70 percent of new cell phone subscriptions are coming from the developing nations of this world. Another interesting thing is that Bluetooth is now standard in quite a few cell phones.
And indeed, based on that, Microsoft had an RFP a small while ago where 14 universities were funded. The whole idea was the cell phone as a platform for healthcare: we want to piggyback on top of that ubiquity of the cell phone, to leverage it, and to come up with technologies that can be integrated with a cell phone that might benefit the health and well-being of people in developing nations specifically.
So for example, one application that we developed, or that was displayed at TechFest a couple of months ago, was getting microscopic images. This is a lens that you can put onto the end of your cell phone, over the normal camera in it, and then you can get, for example, images of, say, [inaudible] blood cells, where you might see that someone might possibly have malaria. So you can transfer the image up to a doctor that's maybe a hundred miles away and get quicker results back.
And another example: researchers at St. Louis University developed an ultrasound probe so we can take ultrasound images of anything, basically, and again you can do it remotely, with maybe a trained person at a local village who can send the images up to a trained physician somewhere farther away. So this might be very helpful in developing countries.
Then another interesting thing: if we chart our lives from when we're a child up to when we get older, think of how many times I visit the doctor, so how many times my physician gets a feeling for how I am. Generally it's quite a lot when we're younger, and then as we get into our teenage years and 20s and 30s, we don't go to the doctor very much at all. So sometimes it can be hard, or harder, for him to make a truly informed decision regarding your health or well-being. And the idea we'd like to introduce is this concept called lifelogging, where we try and log not just these snapshots in time but many, many more snapshots, very frequently during the day. Some of my own research was on the SenseCam, and that was taking two to three thousand pictures per day. So we can build a much more complete picture of your life and lifestyle.
And that's what lifelogging is about: trying to record as many aspects of your life as we possibly can record digitally. And sometimes it's for a reason, so it might be for some medical application, seeing what someone's lifestyle is like. Sometimes it's just for family gatherings, so we can look back at it. Other times we just do it because we can; we don't know what we want to do with it yet, but it's an interesting technology and we're going to have fun with it.
And if we consider lifelogging devices, the main area I worked in was the visual lifelogging domain, and a lot of past research has been in how do we miniaturize these devices [inaudible], how can we make the battery life longer and fit more storage in them. It's only recently, really, that we're considering how do we retrieve all that information or understand it better.
So the aims of this project are, first of all, to utilize the ubiquity of the cell phone, so we can design a logger that takes many different sensors and reads the data from them. And we'd also like to review the physiological activities in perhaps a more intuitive manner that might be helpful for users. An interesting thing is that when we're given, say, prescriptions for drugs or things like that, it's against a sort of general baseline; but it would be better to have an idea of what our own lifestyle is like, and with lifelog data perhaps we can begin to individualize what our lifestyles are like and perhaps make better choices.
And I'm just going to take a quick drink of water here.
And I will first of all talk about the data logger. The idea is that we can have these different sensors that can record your heart rate, your location, any images that you might take with your cell phone, or any other types of sensors; for example, there's a body temperature one. So then, say we call this lady Jill or something: she has got her cell phone, because we all have cell phones around us, there's 4 billion of them lying around. What we'd like to do is build a logger that can incorporate all these sensors, store the data somewhere on the phone first of all, so we can later upload it to a central repository or data store, and then at another time we can sit down and review this data, perhaps with a medical physician or some other person, or even just for ourselves, for our own interest.
And the thing about this logger is that we want to be able to incorporate new sensors. So Mike has been doing some work here on a neck cuff like this that you can wear around your neck, and the idea is that there are many different types of sensors in it; a possible application is that we might want to look at people when they are sleeping. So we want to have this framework so we can incorporate sensors like that, which might give maybe some type of microphone data or galvanic skin response or pulse oximetry values. And another sensor we might want is, for example, one like this here, where we can record your heart rate and it transmits via this small thing over Bluetooth to your cell phone. So what we've built here is a framework that can include these different types of sensor values so they can be incorporated along with these other values. And for a new user, programmer, or sensor developer, it should be quite easy within this framework to design what we call a new class and then to incorporate it into the logger, and all the data is stored as XML. We're dealing with incomplete and heterogeneous sources of data, and we store that as XML on the cell phone. So we spent quite an amount of time building the logger, but that's, I guess, been done before. One thing we hope to do eventually is release it as open source, and it's pretty much ready for that, so that might be quite helpful to the community. But I guess this has been done before more or less; it's in trying to incorporate these sources of data that it could be a bit novel.
And just to summarize: it's built for .NET, it incorporates these additional sensors, and it's designed to deal with these incomplete and heterogeneous sources of data.
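To make that class-per-sensor idea concrete, here is a minimal C# sketch of what such a plugin might look like. To be clear, the class names, the line-based protocol, and the XML layout are illustrative assumptions, not the project's actual code:

```csharp
using System;
using System.IO.Ports;
using System.Xml;

// Hypothetical base class: each sensor type subclasses this, and the
// logger polls it and appends readings to an XML log on the phone.
public abstract class BluetoothSensor
{
    protected SerialPort Port;          // a paired Bluetooth sensor shows up as a serial port
    public abstract string Name { get; }

    public void Connect(string comPort)
    {
        Port = new SerialPort(comPort, 9600);
        Port.Open();
    }

    // Parse one reading from the device's (assumed line-based) protocol.
    public abstract string ReadValue();

    // Append a timestamped reading to the XML log.
    public void Log(XmlWriter log)
    {
        log.WriteStartElement("reading");
        log.WriteAttributeString("sensor", Name);
        log.WriteAttributeString("time", DateTime.UtcNow.ToString("o"));
        log.WriteString(ReadValue());
        log.WriteEndElement();
    }
}

// Incorporating a new sensor is then roughly the "five lines" mentioned later:
public class HeartRateSensor : BluetoothSensor
{
    public override string Name { get { return "heartRate"; } }
    public override string ReadValue() { return Port.ReadLine().Trim(); }
}
```

The point of the base class is that the logger only ever deals with BluetoothSensor, so adding a device really is just the handful of lines in the subclass.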
But perhaps the more interesting thing we were interested in on this project is how do we review this data in an intuitive manner -- sorry, you've got a question?
>>: Why around the neck?
>> Aiden Doherty: Why around the neck? Well, in this case what we're trying to do is monitor a condition called sleep apnea. The thing about around the neck is you can get voice, so we can see if I'm sleeping, if I'm breathing or snoring, or not breathing as well. So that might be a useful source of information, so it's just -- yeah?
>>: [inaudible].
>> Aiden Doherty: So there's also oximetry built into that as well. So there's a small -- and we're going to make it -- in fact, we have a much better version of it already. So there's a small clip here you can put on your ear, and then we can see your pulse oximetry and heart rate values as well. And we're also going to incorporate accelerometer values into it, so you can see perhaps what pose you're in when sleeping. So this is sort of a work in progress; it isn't actually [inaudible] but we'll come to that later, so hold on.
>>: [inaudible].
>> Aiden Doherty: Yes. So the thing is that the sensor should transmit via Bluetooth, and then the cell phone can sniff for the Bluetooth devices that are around, so with five lines of code or something like that you should be able to incorporate a new sensor type into it.
>>: [inaudible] cell phone not being [inaudible].
>> Aiden Doherty: Correct. Yes. So it's software, a mobile application, or a mobile logger. Okay. Hope that's fine. Yeah?
>>: What type of sensor is GSR?
>> Aiden Doherty: It's galvanic skin response. I think, now correct me if I'm wrong, that it looks at the level of, or analyzes, your sweat, and it can somehow be correlated, or people claim it can be -- now, I haven't read these papers very closely -- with emotions. So maybe, for example, how angry you are, or the intensity of an emotion. That's what's been claimed; I haven't read the papers very closely, so that's the claim.
Okay. So in a lot of past research, and I know it myself in reviewing this data, we look at it on a typical ER-type screen of a heart rate and we just look at the graphs. But if I'm wondering why it was perhaps higher here, sometimes it can be hard to remember. So we'll go through a very brief and simplistic view of the human memory system: how can we design a system that considers these aspects, so that we can make more informed decisions and exploit the human memory system in a better way?
First of all, we've got sensory memory. That only lasts a couple of hundred milliseconds at most; it's how we process images in our brain and things like that. Then we've got short-term memory, and that probably only lasts around five or ten seconds, no more than a minute I would say, and we can generally only remember maybe five to nine items in it: things like what will I do next, do I need to go and get a cup of tea, or put sugar in my tea, things like that.
And then there's long-term memory, which is split up into two different things. We've got procedural memories, so that is things like learned skills: how to kick a football or how to tie your shoes. And then there's the other sort of memory, which is split up into semantic memory -- so I know Paris is the capital of France, I know that my football team haven't won the big championship in 17 years and it gnaws away at me every day of the week -- and as well we've got these autobiographical memories, which is remembering things about my life: remembering that on Saturday I was out at the outlet mall at North Bend, or remembering that I was away with Mike last week, when we went into the city center to meet some researchers at the UW medical school.
And then if we consider a bit more of the research underneath autobiographical memory, one thing that comes up is that cued recall is better than free recall. So if I ask anyone what did you do on the 22nd of February, you're stumped. But then if you give a cue and say that was the time TechFest was on, you say, oh yes, that's right, and I remember going around two or three o'clock. And then if you provide some images, you're providing more cues, so I can remember better: oh yes, that's right, I was talking to Tom at that time. So if we provide these cues, it's easier for us to provide reasons or to remember what we were doing.
Memories can also be temporally encoded, so we're pretty good at estimating when things happened. We mightn't get it exactly right, but we can remember when things happened relative to other things. So I remember that I went back to Ireland since TechFest, so I can perhaps use these memory hooks on timelines.
And another thing is that some distinct memories are more strongly encoded. So I might remember that I went for a run on Tuesday and was absolutely gasping for breath because it was the first time I had run in two weeks. Or another thing you might remember is, for example, that the former president of India was giving a talk today and I was watching that seminar on the webcast; it's not every day a former government leader gives a talk at Microsoft. So I might remember that better than what I had for lunch, my normal four sandwiches earlier on today, because that's a recurring event. And as well, some memories are stored by association. So if I think of fish, the next thing I think of is not football; we don't associate things randomly, we remember things by association. So if I see Mike, then I might think of some work we did with the SmartLogger, and that might make me think of someone else. We operate associatively.
So if we want people to effectively review their physiological data, first of all we log the data on the cell phone, which gives us these potential cues, so we can see when heart rates were high or what locations you might have been at. Then we want to be able to query the data temporally; to highlight more distinctive events using charts, so we can see when heart rate was higher at times, which might indicate a more distinctive event; and then to associate related events, so we associate a heart rate with maybe a location or with some pictures around that time, so we can better understand why my heart rate was high or low at a certain time.
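As a rough illustration of that "highlight distinctive events, then associate cues" idea, here is a small C# sketch. The Reading and Photo types, the above-the-mean threshold, and the five-minute window are all assumptions for illustration, not the system's actual logic:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Reading { public DateTime Time; public double HeartRate; }
public class Photo   { public DateTime Time; public string Path; }

public static class Review
{
    // Flag distinctive events (heart rate well above the day's mean) and
    // attach any photos taken within a few minutes, to serve as memory cues.
    public static void Associate(List<Reading> readings, List<Photo> photos)
    {
        double mean = readings.Average(r => r.HeartRate);
        var distinctive = readings.Where(r => r.HeartRate > 1.5 * mean);

        foreach (var ev in distinctive)
        {
            var cues = photos.Where(p =>
                Math.Abs((p.Time - ev.Time).TotalMinutes) <= 5);
            Console.WriteLine("{0}: HR {1} -- {2} photo cue(s)",
                ev.Time, ev.HeartRate, cues.Count());
        }
    }
}
```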
So we came up with an interface like this, which has sort of three different panels. Up at the top left we have the query panel; that's where we construct our queries, so we can query when things happened temporally, query by dates, constrain it to different days, or compare data across days. And then obviously we want to chart this physiological activity; in this case it's heart rate, and we can see that more distinctive things might stand out, for example when your heart rate is higher.
And finally, we want to provide the context around why my heart rate was very high here. So we want to see the context around it, so we can provide some location or other information which might help us better remember the reason why my heart rate was 150; it might be because I was out for a run. And in this case here, where the heart rate is high, why is that? Because I was up at the Pro Club there. So it provides a reason; it's not that my mentor Chris gave me more work to do and I was having almost a heart attack because of it.
So we provide some reasons. We can provide location context, or we can also provide some image context as well, because there's quite a bit of literature supporting the fact that images are very strong cues to remembering things. So on, say, a sort of normal day, what was I doing here? Well, I was in my office as normal, sort of testing out this little neck sensor. So it provides another reason, or helps us explain things better. And then we want some adaptability in this interface, so we want to be able to look for patterns across different days.
So perhaps we sit down with our physician to compare, and what I show here is that I can group the graphs by day, and I can see perhaps a pattern, maybe around four o'clock in the day or eight o'clock in the evening, of what might be happening; and you can imagine looking at this over a number of days, too. And as well, we can look at different types of data sources, so we normalize them all onto the one scale. So you can see here, I think that's the number of steps, and how it might relate to heart rate for some reason, or how accelerometry might relate to those there, too.
So this is just a case where we logged the data, but you can imagine in the sleeping case we might want to see how accelerometry looks in a graph like this compared to breathing or to heart rate. So by being able to easily query the data we can perhaps see potential relationships. And indeed, you can delve deeper into the data: you can bring the timeline closer in to look more deeply into events, just through normal zooming or scrolling of the timeline. And we can adapt the constraints on the data as well. Say I just want to see what happens on weekends: do I sleep differently then, or do I have some other physiological activity, am I perhaps more active on the weekends or less active than I should be? We can query that data adaptively over time, too.
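A minimal sketch of those two ideas, normalizing different sources onto the one scale and adaptively constraining the query to weekends, might look like this in C#; the min-max scaling and the names are assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Charting
{
    // Min-max normalize a series to [0,1] so steps, heart rate and
    // accelerometry can be drawn on the one scale.
    public static List<double> Normalize(List<double> series)
    {
        double min = series.Min(), max = series.Max();
        if (max == min) return series.Select(_ => 0.0).ToList();
        return series.Select(v => (v - min) / (max - min)).ToList();
    }

    // Adaptive constraint: keep only weekend samples, e.g. to ask
    // "am I more or less active on weekends?"
    public static IEnumerable<KeyValuePair<DateTime, double>> WeekendsOnly(
        IEnumerable<KeyValuePair<DateTime, double>> samples)
    {
        return samples.Where(s =>
            s.Key.DayOfWeek == DayOfWeek.Saturday ||
            s.Key.DayOfWeek == DayOfWeek.Sunday);
    }
}
```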
And indeed, as I said, one thing that we're hoping to do, it's in the pipeline, is to perhaps make some open source releases, which might be helpful, first of all, for sensor device researchers, so they don't have to concentrate on how do I visualize that data or display that data in some way. Now you can just plug your sensor quite easily into this framework and it will show up on the interface we just showed, which can easily incorporate these new sensors.
And as well, another interesting group might be people who are interested in machine learning: they want to gather lots of data, but they don't want to spend time trying to configure devices and things like that, so they could take the database that we have and run their own machine learning algorithms on it to identify patterns. And maybe as well, some health-conscious individuals might just be interested in logging their heart rate data and being able to query it in a different way to perhaps some applications that are already out there.
And I'm going to show some ongoing work here now in the last five minutes or so. So one use case of this is the application to sleep apnea. Mike has been mostly driving this, but we're working along with him. He's developing this sensor here that we were just talking about, with these different sensors giving perhaps the pose or the breathing or accelerometry values and things like that.
And we're in some preliminary discussions with doctors in the Sleep Disorders Center at UW as well, just to see if there's anything potentially interesting to them; it's good to get their feedback, their tune. It sort of comes back to what I said at the very start about how sometimes we don't want to be stuck in our own research silos; we want to collaborate across multiple disciplines as well. So it's good to collaborate with these people who have a very different perspective to ourselves. And indeed, sleep apnea is something that around 12 million people suffer from. To try to diagnose it, at times you might have to go to a special sleep lab, and it can be hard to see how a person truly sleeps in an unusual environment like that. So imagine we could monitor people in their own homes over a month or something like that, where we can build up a much more complete picture; that's the whole lifelog vision, so we make better informed decisions.
And as well, there are a couple of future challenges. This project was displayed at TechFest, and it was very helpful to us because we got to see potential work that could be done. So there's Kristin Lauter here in the research department, and she has got a cryptography group. And what she's interested in looking at here is the security of the data, because it's your medical data, your physiological data, so it's intensely private to you and you don't want to release it to other people.
So we've got to consider these privacy concerns, and we've got to consider both directions as well: not just the data being uploaded from the device to the phone, but also from the phone, so that some other device or some other phone can't control your device in a certain way. I think it was shown recently with pacemakers that there's a certain signal you could send to reset them or turn them off; obviously that's pretty important security, so you can't turn off somebody else's pacemaker, which would be pretty severe. And as well, in uploading the data from the cell phone to your central repository, you want that data to be safe and secure. So the cryptography group here is looking at that longer term.
And another interesting thing is where do we store the data? At the moment it's just a sort of SQL database, but longer term we would like to store it somewhere like HealthVault, which the Health Solutions Group is doing, so we're having some interactions with those guys at the minute, too, to see longer term how we can integrate our own platform so that the data's stored in HealthVault. They're interested in the privacy of the data, so it's only the individual who releases their data to whoever they want to release it to, and things like that.
And another thing, sort of farther down the line, that we're just having very preliminary discussions about is that people who are interested in machine learning might be interested in this: giving some recommendations to people based on looking at the patterns. So perhaps we had a certain pattern here -- this is just a screenshot image, but imagine it was meaningful data -- and perhaps we could see over a while that you might be showing some [inaudible] of sleep apnea, or even farther into the future it could maybe be [inaudible] for multiple types of conditions, so that perhaps you might want to go and visit your medical physician to get checked out more closely.
And indeed, this sort of ties into work we've done ourselves in Dublin on the SenseCam. The SenseCam is this camera you wear around your neck; it's very small, lightweight, and takes two to three thousand pictures per day, which is around a million a year. And one thing we've done is manually annotate around 95,000 images across what we call 27 concepts, with things like sky, grass, buildings, working on PCs, or what am I eating. We've trained classifiers on those, and a number of them have reasonable accuracy as well. And the interesting thing about that is that we can begin to compare lifestyles across social groups.
So here we have five users, and we can begin to compare. This black line is the median of what you would expect the social group to do, so we can see that user two is the [inaudible] so-and-so in the group because he eats more than the others; or we can see that user three sees more external views of vehicles. What was that? Well, he cycles into work every day, so he's seeing all these external views of cars and buses. User one drives a lot more than the rest of us, and the reason is he's the only guy out of us that does drive. User four is the very diligent researcher: he reads more than the rest of the people, and anybody that's been in our research group knows that [inaudible] works much harder than the average person, so he would read much more than us.
So we can begin to see things like that. You can imagine then perhaps having some little Facebook application or something like that: who's been most active this week out of our small little social group, or who's working too hard and becoming a boring so-and-so.
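The black-line comparison just described is essentially a per-concept median across users; a minimal C# sketch, with hypothetical names, might be:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class GroupCompare
{
    // Median of per-user counts for one concept (the "black line" on the chart).
    public static double Median(List<double> counts)
    {
        var sorted = counts.OrderBy(c => c).ToList();
        int mid = sorted.Count / 2;
        return sorted.Count % 2 == 1
            ? sorted[mid]
            : (sorted[mid - 1] + sorted[mid]) / 2.0;
    }

    // How far a user sits above or below the group for, say, "eating";
    // a ratio above 1 means more of that concept than is typical.
    public static double RelativeToGroup(double userCount, List<double> group)
    {
        return userCount / Median(group);
    }
}
```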
And one last thing, I guess, to consider is the fact that at the minute we're transmitting by Bluetooth, and it's quite power intensive, so an interesting thing to consider in future would be ZigBee, which is a lower-power wireless transmission protocol. But I guess that's probably a bit farther off in the future, because at the minute Bluetooth is the mainstream in cell phones. We hope that in future something like ZigBee means we can run loggers for longer and they become more common as well, because a lot of people don't like to run loggers, and rightly so, because it drains the battery in their phone. But if it's lower power, then we can begin to gather more data, which means we'd make better decisions.
So in conclusion, last slide here: we're trying to utilize the ubiquity of the cell phone in building this logger, and we've tried to build a framework where it should be quite easy for people to incorporate new sensors. And we'll try and make an open source release of that; it's in the pipeline at least, we've planned that.
And in reviewing these physiological values, we're trying to make it easy and intuitive for people to do that. As I said, there's lots of exciting things to do in future; I think this [inaudible] has some legs to run yet. So thank you very much, folks, for your attention. I appreciate it very much. Thank you.
[applause].
>>: So it seems to me that these sort of two elements of the work are really [inaudible] in a way, aren't they? I mean, reviewing the physiological data, that's really just telemetry, trying to come up with [inaudible].
>> Aiden Doherty: That's right. And to provide the contextual information around it then to help, yeah, remember better. Yeah. And that relies on the first one, obviously, to go and gather the data, too, yeah.
>>: Yeah. So I guess I'm wondering whether there's an interest in [inaudible]
obviously interested in looking at different approaches.
>> Aiden Doherty: Yeah.
>>: But more continuous data by, for example, [inaudible].
>> Aiden Doherty: Yeah.
>>: [inaudible] gathering data and [inaudible].
>> Aiden Doherty: Yeah. We would, yeah, love people who have those sensors, say ones that transmit via Bluetooth, to actually try them out on this framework too, and then to see how it logs, how you graph it, and stuff like that. So the idea is that we'd be trying to encourage people who have those sensors to get their hands dirty, and more or less to stress test it; I want to see how it actually does work, yeah. So that's sort of a goal of the thing: we're trying to encourage people with those sensors to use this. Yeah.
>>: [inaudible] are actually looking at Bluetooth and have you looked at how
[inaudible] that's happening? And number two, what exactly do you think the
[inaudible] what looks [inaudible].
>> Aiden Doherty: Okay. So the first part of the question was how prevalent Bluetooth devices are. Now, I don't have any statistics on that; we purchased this here device from, I think it was New Zealand, to monitor heart rate data, just a standard one like this. And I guess the advantage of that over, say, your Polar heart rate monitor is that I don't have to wear so much; if I have two or three monitors, I don't have to wear a lot of jewelry or watches here on my hand for these different monitors. So I think more of them are coming with Bluetooth, but it probably will be a small while yet, because I guess a lot of manufacturers want to consolidate their place as well, so people have to use their own software and device to upload the data.
And as for [inaudible] how much of it we want to make open source: the plan is all of it.
>>: [inaudible].
>> Aiden Doherty: So, pieces -- well, I guess there are sort of two main pieces: one is the actual logger for the cell phone, and the other is the database and the interface application to review that data. And, you know, we've tried to make it in such a way that it should be easily extendible, or as easily extendible as research code can be. Yeah. So that's the plan at the minute. Yes?
>>: How are you going to get the caregivers to assimilate this information? Obviously they're not [inaudible] this DVD of raw data; that's what I've been doing the last couple of weeks. How do you get them to assimilate that? [inaudible].
>> Aiden Doherty: Yeah, that's a good question. So I guess we would hope that the interface makes it easier to drill down from a more macro level at the minute, but then it requires one of those ongoing stages of the project, some development of that [inaudible], particularly with the machine learning group. So I'd say right now you'd review it through the interface that we showed at TechFest, and that's probably easier than giving them the DVD and an Excel spreadsheet at the minute. But I guess it's probably another year or two to apply some machine learning algorithms [inaudible] so we can begin to recognize some patterns that might make it easier for them.
>>: In terms of making them available [inaudible].
>> Aiden Doherty: Yeah.
>>: So there's a platform where you can talk to physicians, but you still want to have some filtering back. And there's a limit to what you can do with Bluetooth, and your time is constrained on the cell phone: how long do cell phones run when you're talking Bluetooth all the time? I mean, I know when I have my headphone on while I'm driving and whatnot, my battery goes down much faster if I spend time with the [inaudible].
>> Aiden Doherty: That's actually a very good question. So most days I probably only log three, four, five hours at most, I would say, when I was in work, and I would notice the battery running down quite fast even for those small amounts of time. With the SenseCam it would be like 16 hours a day, so when something becomes wearable like that you forget about it, obviously, and people are going to log much more data, and that's what we'd get with people [inaudible] and images and things like that, too.
So the thing we did with the Bluetooth on the cell phone was to log the data less often, so the profile would be once every minute instead of once every second or something like that, and I can increase that interval. But I guess the thing we'll have to think about in the future is how fine- or coarse-grained we want the sampling of the data to be. So I don't have an answer to that at the moment.
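A sketch of that duty-cycling idea, logging on a configurable interval rather than continuously, could be as simple as a timer; the names here are hypothetical:

```csharp
using System;
using System.Threading;

public class SampledLogger
{
    private readonly Timer timer;

    // Log once per interval rather than continuously; a 60 s interval
    // instead of 1 s trades temporal resolution for battery life.
    public SampledLogger(Action takeReading, TimeSpan interval)
    {
        timer = new Timer(_ => takeReading(), null, TimeSpan.Zero, interval);
    }

    public void Stop() { timer.Dispose(); }
}

// e.g. new SampledLogger(LogHeartRate, TimeSpan.FromMinutes(1));
```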
>>: There are a lot more efficient [inaudible] at different levels, so I think you can sample and upload and do whatever. If ZigBee or some lower-power version of it is the target, is Bluetooth going for lower power as well? Everybody is looking at the [inaudible].
[brief talking over].
>>: You could probably spend one-hundredth of the power just putting the data
to a local SD card, right, no processing at all [inaudible].
>>: So there are lots of other ways. But there is a faction of the Bluetooth standard that is looking at lower power, but most of them are looking at higher bit rates and --
>>: But then, you know, the SD card or whatever, if you're on the phone, that's great, but if you're talking about inside the device, bringing the cache and then [inaudible] I want my device [inaudible].
>> Aiden Doherty: Yeah. So [inaudible] or something like that, yeah, with really low power, and it's a passive logger running in the background on it.
>>: [inaudible] right?
>>: [inaudible] go over the Bluetooth [inaudible].
>> Aiden Doherty: Could you just refine that question a little bit more?
>>: I'm wondering which Bluetooth profile [inaudible].
>> Aiden Doherty: So we set up a serial port on the phone; the sensor registers as a serial port, and then in the code you would just specify what data it sends, and then there are around two or three other small lines of the code to change, and that should be it, in theory.
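Assuming the sensor is exposed through the Bluetooth serial port profile, as just described, those few lines might look roughly like this in C#; the COM port name and the line-based protocol are device-specific assumptions:

```csharp
using System.IO.Ports;

public static class SppExample
{
    // After pairing, the phone exposes the sensor's Bluetooth serial
    // profile as a COM port; reading from it is only a few lines.
    public static string ReadOneValue()
    {
        using (var port = new SerialPort("COM7", 9600))   // port name is device-specific
        {
            port.Open();
            return port.ReadLine();   // parse whatever the sensor sends per line
        }
    }
}
```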
>>: [inaudible] heard about Bluetooth [inaudible]?
>> Aiden Doherty: No.
>>: Okay. So that's a [inaudible].
>> Aiden Doherty: Okay.
>>: There's an organization called the Continua Alliance.
>> Aiden Doherty: Okay.
>>: It's like a -- it's [inaudible] a bunch of member companies, and they really -- they point to certain standards. They have like a stack; they provide their guidelines, they provide [inaudible].
>> Aiden Doherty: Okay.
>>: They have Bluetooth, which --
>> Aiden Doherty: Okay. I'm going to ask you to forward that link on to me if that's okay.
>>: Sure.
>> Aiden Doherty: Okay. Thank you very much.
>>: What [inaudible].
>>: The health device profile, I think, is [inaudible] high-powered Bluetooth, but the Continua Alliance itself is trying to figure out the more [inaudible] power guidelines for the [inaudible], so I think they're going to be using Bluetooth. That's what I hear. We're not [inaudible], we're not members of the Continua Alliance, so we don't really have [inaudible] into the Bluetooth yet, but word is that probably [inaudible] Bluetooth, and I think that they work [inaudible].
>> Aiden Doherty: Okay. Thank you. Yes?
>>: [inaudible] on top of the [inaudible].
>> Aiden Doherty: Built from scratch. So it will also -- the SenseCam has like a --
>>: So why not -- why not build on SenseCam?
>> Aiden Doherty: Well, I guess, first of all, in terms of logging the data, they're on completely different platforms, so logging the data is a completely different thing. Then with regards to viewing the data, I guess the SenseCam has very different characteristics: the images are very strong cues to information, and with the SenseCam as of yet we've only briefly looked at logging heart rate monitoring or physiological activities. We had an interface actually somewhat similar to the one that I showed there, but most of the SenseCam work was purely image processing, processing the images and showing the images; we thought, say, accelerometer data or something like that wasn't so helpful, as the image data seemed to give the stronger cues with regards to what I was actually doing at a certain time.
>>: [inaudible] this one, yeah.
>> Aiden Doherty: All right. So this is our SenseCam website, and that's where all our publications on the SenseCam are. It's quite interesting that the SenseCam was briefly mentioned in Science magazine, and I think a week or two before that in Time magazine as well, so it's good publicity for the SenseCam in a way. That's because of some fMRI studies on people with memory deficiencies, where the SenseCam seems to be quite helpful for helping them remember things they were doing perhaps three or four days ago that otherwise they might not remember. So there are researchers at a hospital in Cambridge in the UK looking at research on the actual patients themselves, and where our work comes into play is how do you manage that collection of a million images a year or something like that, and how do you summarize that information. A lot of that is described on the website here that we have, and that's what my PhD topic is on, too; that's what most of our work is in. When I go back to Dublin on Monday morning, that's what I'll be thinking about again.
>>: [inaudible].
>> Aiden Doherty: Sorry?
>>: [inaudible] just making sure I [inaudible].
>> Aiden Doherty: Okay. Thanks again for your attention, folks, I appreciate you
taking time out of your days. Thank you very much.
[applause]