>> Desney Tan: Gabe Cohn doesn't really require that much introduction. He's been
around for a while hanging out with us. I still remember first meeting with Gabe, young,
super-star student out of Caltech, specializing in embedded systems, VLSI, and
computer architecture. Over the years it's been great hanging out with Gabe, watching
him take his skill set and grow into an amazing ubicomp researcher. Gabe's pushed the boundaries in various ways. I used to have this standing challenge
for all my interns to try and break as much as they can in the first week here. Until Gabe
came in, Scott Saponas, now with the group, held the record: he took apart our forty-thousand-dollar EEG device in his first week, and when I walked in he couldn't put it back together. The first weekend of Gabe's internship he called me and he goes, "Hey, I think I
broke your house." So I stopped doing that challenge; it didn't seem like it was easy to
top.
The first internship Gabe set the bar really high and ended up with three papers, two best paper awards, and one best paper nomination, which made it really rough on our
subsequent interns. If you look at his portfolio: NSF Graduate Fellowship and MSR
Graduate Fellowship, multiple, multiple best papers, and a start-up company that's gotten funding in a recent Series A round, with a product out on the market and prototypes in hand. Rather than stand in the way, let me hand the mic over to Gabe.
>> Gabriel Cohn: All right. Thanks a lot, Desney. So I'm going to talk about much of my
work on building embedded sensor systems to bring ubicomp to life. And since a lot of
this work was done here at MSR, you might recognize some of these things. So first
what is ubicomp? So ubicomp, or ubiquitous computing, is this idea that was started in the late eighties to early nineties by Mark Weiser and his colleagues at Xerox PARC.
And the idea was that instead of having users interact with a single personal computer,
they would instead interact with multiple computing devices simultaneously as they go
about their everyday lives. And these computing devices would not only be sitting on your
desktop as shown here, they would also be worn on the body in the case of wearable
computing or mobile computing or off the body and seamlessly embedded into the user's
home and work environments.
And enabling these kinds of ubicomp applications requires building these
embedded sensor systems. Now embedded sensor systems can generally be broken
down into four main parts. There's the human interface, the sensing, the computation
and the communications. And developing these kinds of systems actually requires
expertise in nearly every subdomain of computer science and electrical engineering.
Luckily in the last 25 years since Weiser had this vision of ubicomp we've actually seen
exponential growth and improvement in nearly every single one of these subdomains.
And as a result of this progress we've actually seen our first major victory in ubicomp
and that is mobile computing truly is ubiquitous today. So from our smart phones and
tablets and laptops, we've really become accustomed to having information always
available at our fingertips. And although this is a huge success for ubicomp, we still
have not yet realized many of the original dreams of wearable computing and enabling
smart environments.
However, this is still a really exciting time because we are just beginning to see what
these kinds of applications might look like. But in order to bring many of these ubicomp
applications to life we need to continue to do research, not only in each of these
subdomains of computer science and electrical engineering but research that crosses these traditional boundaries between the human interface, sensing, computation and
communications. And this multidisciplinary research is required in order to reduce the
adoption barriers that are currently limiting widespread use. And the problem that you
can see here is that sensing is still very invasive, both in terms of the number of sensors
required on the body and off the body as well as the way in which these sensors are
used. And often the problem is actually power consumption.
So the high power consumption of wireless sensor systems typically limits their battery
life and, therefore, their usefulness. My work is focused on trying to make sensing more
noninvasive by doing tightly integrated hardware-software development with a focus on
ultra low power indirect sensing. More generally I look at identifying new opportunities for
sensing, building new embedded sensor systems and then finding new applications, new
application domains for this sensing. And I do this work for both on-body and off-body
sensing. This slide gives an overview of most of the work I've done in grad school, and in
this talk I'll only talk about a subset of this so I won't bore you with everything I've done.
So I'll talk about a little bit of work that I did here actually on on-body sensing to enable
low power human computer interaction by leveraging the conductive properties of the
body. And then, I'll talk about some off-body sensing where instead of leveraging the
conductive properties of the body I'll actually leverage the existing infrastructure. And
again, to reduce the number of sensors required or reduce power consumption.
So let me first talk about this on-body work and so the focus of this work is actually to
enable human-computer interaction. As we move to more ubiquitous interfaces, we have
to move away from the traditional mouse and keyboard paradigm of interacting with
computers and find more natural ways of doing these kinds of interactions. And so one
thing that's become very popular is to use whole body gestures. So this is something
that we're all very familiar with using computer vision and depth cameras like the
Microsoft Kinect. Unfortunately these camera-based systems actually have some drawbacks. One is occlusion. They can only see directly what's in front of them, line of sight,
so if you have a second user they might actually block the line of sight from the camera
to the other user. In addition to occlusion there are big problems with lighting. So anyone
who's been upstairs to Andy and [inaudible]'s lab sees that they all work in the dark. And the reason is that lighting actually is a problem for these camera-based systems.
This is actually from the Kinect support page. And you see things like, "Make sure the
room is brightly lit and avoid direct sunlight."
But I see actually the biggest issue with camera-based systems is their limited field of
view. So if you want to interact using whole body gestures in your living room, you can
do that. You can use a Kinect and that works great. But if you want to do this kind of
interaction everywhere in your house, you'd have to instrument your entire house with cameras. And of course this is difficult in terms of the installation and maintenance but it's
also not practical to put cameras everywhere in your house. So this is still a really
interesting way of doing whole body, free space gestures, so I want to do this kind of
interaction as well. I don't want to use a camera because I don't want to be limited to
doing this just in the living room; I want to be able to interact with my computer
anywhere. So I could interact in the kitchen, in the dining room or in the bedroom.
And so I'm going to talk about a project called Humantenna which allows you to do just
this. It senses whole body, free space gestures in real-time but it does it, instead of using
a direct sensing approach with a camera, it uses an indirect sensing approach where
we actually use the human body as an antenna. And I'll describe what that means in just
a minute. But the real advantage of this is that there's no instrumentation to the
environment and only minimal instrumentation on the user's body. So you can think
about it as Kinect-like gestures without the Kinect. Now I talked about using the human
body as an antenna, so let me describe a little bit about antennas.
So you might recognize this; this is a typical TV antenna. It's called a Bunny Ears
antenna because of the way it looks, but it's not the only kind of antenna. This, in
addition to being my brother, is also an antenna. This is a human body antenna. It's
called a teenager, again, because of the way it looks. But it actually wasn't designed to
operate at a fixed frequency like the TV antenna. It's really just a dielectric with a
complex geometry which works fairly well as an antenna between about 40 hertz and
400 megahertz. And this is nothing new. This is actually known as the body antenna
effect, and it's actually a source of noise and interference in body area networks and
anyone who's looked at analyzing signals on the body so ECG, EMG like a lot of the
people here have done.
And so in this project I'm actually going to use this noise, or this interference, as my signal. So let me show you how this works. Here's [inaudible] in cartoon form. This is, I think, the only embarrassing picture of anyone in this room that I have in this slide deck, I just realized. And so here's our human body antenna, and in this project we're going to look
at frequencies below about 200 kilohertz. And in this band the signals that the body
picks up are the electromagnetic noise radiating off the power lines and appliances in
the home. And so we can actually sense this very simply just by measuring the voltage
on the body and then applying some signal processing and machine learning in order to
enable this gesture interface. So what does this signal look like?
Well, if you take a look at it in the time domain, you see basically a distorted sine wave
at 60 hertz. This makes sense; 60 hertz is the frequency that power is delivered to all of
our appliances. So this is that noise that's radiating off the power lines. The amplitude of
this signal actually varies as a function of the user's proximity to these noise sources, to
the power lines and appliances. If you look at this in the frequency domain, you again
see this 60 hertz peak and then you see all the harmonics of 60 hertz. Now the harmonic
amplitudes vary as a function of the user's posture and this is because as the user
changes their posture, they're changing their antenna properties. Essentially the transfer
function of that antenna is changing. So using a machine learning approach where we
pull out some of these time and frequency domain features, we can actually train the
system with a user and a number of different postures and then classify which postures
they're in.
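(As a rough sketch in code of that harmonic-amplitude idea: the sample rate, window, and harmonic count below are illustrative assumptions, not the exact Humantenna parameters.)

```python
import numpy as np

def harmonic_features(v, fs=500_000, f0=60.0, n_harmonics=20):
    """Amplitudes of the 60 Hz fundamental and its harmonics, computed from
    one window of body-voltage samples v. A sketch, not the paper's exact
    feature set."""
    spectrum = np.abs(np.fft.rfft(v * np.hanning(len(v))))
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    feats = []
    for k in range(1, n_harmonics + 1):
        bin_idx = np.argmin(np.abs(freqs - k * f0))  # nearest FFT bin to k*60 Hz
        feats.append(spectrum[bin_idx])
    return np.array(feats)  # features for a per-user posture classifier
```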
If we look at higher frequencies, up to about 200 kilohertz, we notice there are these
peaks. And these peaks are the noise generated by the switch mode power supplies, all
these power bricks that we have plugged in all over the place. And again the amplitude
of these peaks is a function of the user's proximity to those noise sources, to the
appliances that generate them. So again using a machine learning approach and taking
these features as well we can also determine the user's location. So this is really
exciting. With no instrumentation to the environment, just looking at the voltage on the
body and applying some signal processing and machine learning, we can determine the
user's posture and location. Of course I promised more than just posture and location; I
said we can do whole body gestures. By gestures I mean the user is actually moving not
just standing still. So let's take a look at what the signal looks like when the user moves.
So you get this voltage waveform like this. The first thing we want to do is actually
segment when the user is actually moving.
So to do this we'll take a low pass filter at about 10 hertz. And we'll notice something
really interesting. If you look at this green curve, you see that when the user is not
moving the line is pretty stable around 0 volts. When the user moves it deviates away
from 0 volts. For now I'm going to ignore why that's happening; I'll come back to that in a
few minutes. But we're going to use this for segmentation. So we can simply set a
threshold on this green curve and figure out when the user is moving. The next step is
we want to do some feature extraction to determine – pull out some features to use for
classification. So we'll take a high pass filter at 40 hertz, and here we can see the AC
amplitude of this voltage signal on the body. And you can see that during the gesture
this amplitude changes, so in order to capture the dynamic nature of the gesture we'll
actually divide the segmented gesture into seven feature windows. And over each
window we'll use a number of time and frequency domain features and then use all
these features in a support vector machine classifier.
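(Putting those steps together, here is a minimal sketch of the gesture pipeline; the filter orders, threshold, and per-window features are illustrative assumptions rather than the exact Humantenna implementation.)

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 500_000  # assumed sample rate of the body-voltage signal

def lowpass(v, cutoff=10.0):
    b, a = butter(2, cutoff / (FS / 2))
    return filtfilt(b, a, v)

def highpass(v, cutoff=40.0):
    b, a = butter(2, cutoff / (FS / 2), btype="high")
    return filtfilt(b, a, v)

def moving(v, thresh=0.1):
    """Segmentation: True where the low-passed signal deviates from 0 V."""
    return np.abs(lowpass(v)) > thresh

def gesture_features(v, n_windows=7):
    """Divide the segmented gesture into feature windows and compute simple
    amplitude statistics per window (stand-ins for the time- and
    frequency-domain features described in the talk)."""
    ac = highpass(v)
    feats = []
    for w in np.array_split(ac, n_windows):
        feats += [np.std(w), np.max(np.abs(w))]
    return feats

# Training on pre-trained gestures, then classifying a new one:
# clf = SVC().fit([gesture_features(g) for g in train_gestures], train_labels)
# label = clf.predict([gesture_features(new_gesture)])
```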
And this will allow us to classify between a number of pre-trained gestures. So in the
evaluation I'm about to show, which many of you helped run in homes many times, we used 12 pre-trained gestures. This involves things like waving your arms,
moving your body from side to side and then a series of punches and kicks because,
you know, we're comparing with Kinect so we have to do a bunch of punches and kicks.
We did an evaluation with 8 participants in 8 homes and found this actually works with
about 93 percent accuracy. Again, this is really exciting. There is no instrumentation in
the environment, just looking at the voltage on the body using signal processing and
machine learning, we can determine what kind of gesture the user is doing with 93
percent accuracy. Yeah?
>>: Do you have to train per person [inaudible]?
>> Gabriel Cohn: You have to train per person per location to get this level of accuracy.
You could potentially make a model that works regardless of person. We only had 8
people and so we definitely got much better than chance with that, but with 8 people we
can't really answer the question of whether it will work in general.
>>: Does it drift over time?
>> Gabriel Cohn: So because we're using the noise in the environment, it's definitely going to change as the noise changes over time. If you change the setting of the lights or turn on a new appliance then the noise around will change. And so, yes, how well it works will change over time. Depending on the features you choose, you could actually choose features that are more or less sensitive to that. Yeah?
>>: Could you characterize the performance envelope for the gestures, since people are inconsistent?
>> Gabriel Cohn: We didn't in here but that's actually kind of an interesting thing to do.
And it depends on the gestures and how they're confused. What's actually interesting is that what's confused most is not slight variations of the same gesture but things that are symmetric but completely opposite, so moving your right hand versus your left hand.
And the reason for this is we can really only tell that apart if something about the noise is
different on one side versus the other. So that's actually more of the issue. But it totally
depends on the noise environment. Yeah?
>>: So if I go to an environment that's trained by other people, if I got my model for my
house, say, and AJ has her model in her house and I happen to go there, is anything
reusable from those two sets and two sets of features?
>> Gabriel Cohn: So what I think – And this is kind of just speculation because we
haven't done that exact test. But I would say that if AJ has her trained set and you go to
her house, you could use her model and it won't work as accurately as if you had trained
it yourself. But you could maybe use that as a starting point and then bootstrap to kind of
build a personalized model. So I think you can definitely have location-based models
that are people agnostic. There will be some difference on different people particularly if
they're different sizes. So it might work fairly well because you and AJ are about the
same size, but AJ and AJ's kids might not work nearly as well. Yeah?
>>: Have you observed [inaudible] with how your body is hydrated?
>> Gabriel Cohn: I think somebody is looking ahead in the slides. We didn't control for
how people were hydrated. I wouldn't expect that to change much. I would actually
expect the humidity in the room to matter more than how well hydrated you are, mostly because, at the frequencies we're looking at, the body is pretty much a conductor at 60 hertz or so. All right. Any other questions? So let me show you a real-time
implementation of this. So here's a video. I'm going to perform a gesture and the TV will
then show you which gesture I performed. So unlike Kinect, it's not building a skeletal
model in real-time. There are actually 12 pre-recorded stick figure videos in here. It's
going through the whole process that I told you and determining which of these 12
gestures I'm performing. So I went to this location, did each gesture once as training and
then filmed this video. So as you can see there's no instrumentation in the environment
and just this simple training.
Now one of the interesting things about this is we can get away from some of the
problems that existing camera systems have. So, for example, I can turn around, as I'm
about to do, perform the same gesture and it will still work because we don't actually
have this occlusion problem. Yeah?
>>: So I see that the purse you're wearing contains the sensors that are on you. And you
talked about how battery power was one of the problems with sensing. I don't see any
wires so I assume that that's battery powered. So does this have that same...
>>: There's a wire on the shoulder.
>>: ...challenge?
>> Gabriel Cohn: So I've never heard someone call it a purse. [Laughter] Let me explain
what's in the shoulder bag. [Laughter] Actually someone mentioned there's a wire. So
there's actually a wire going to the back of my neck. I need contact with the body. Here
I'm making contact with the back of the neck. It doesn't need to be the back of the neck.
We really wanted a location that isn't going to move around as I do gestures, so there are no confounds. But it could be on your wrist as well or anywhere else. In the bag is basically just an analog-to-digital converter, so it's digitizing this signal that's on the body and
then, there's a wireless transmitter sending it off to our server. It doesn't need to be a
large bag. You could simply reduce this into a small wrist watch form factor. Power
consumption could be a problem depending on how quickly you're sampling and if you're
sending a wireless signal. So in this project I wasn't concerned with power consumption.
I'm about to show you on the next slide what you can do with power consumption,
though. Yeah?
>>: How small can you go? Not in terms of the device size but in terms of the gesture.
These are all really large body movements. Can you detect a wave gesture?
>> Gabriel Cohn: So I think the real issue is basically the noise in the environment. So
you could train the system with like small finger movements, and that'll probably work for
some time. But as the noise changes, that will stop working. And so it's really a question
of what's the variability in the noise and not necessarily how fine-grained the gestures
can be if that makes sense. So, yes, I think you can train a classifier that will work with
some very small gestures, but it won't work very long. If you want it to work robustly over
time, you need to use larger gestures. And actually, I think an interesting way of making this realizable is to combine this with a vision-based system where you actually stand in front of the Kinect and use that as your training. You perform some actions in front of the Kinect, you train the system, and then you can walk away and the system can keep working, perhaps.
Now because there's drift over time you might have to periodically walk in front of your Kinect again to update the model. But I can imagine something like that working. And I do consider that to be more practical. I think you could do waves. We actually had waves in our gesture set but I don't know if you can do, like, fine finger motion. Yeah?
>>: Is there an ideal noise source that you would consider just plugging into the wall?
>> Gabriel Cohn: Exactly. So one of the goals here was what can we do with ambient
noise? But if you really want to make this work and you're willing to put one thing on your
wall, what would that noise source look like? And we haven't really explored that but you
can definitely do a lot better if you control the noise. And one of the things you can do is
actually get around some of the issues that we have right now with what happens when someone turns on a light. Well, if you're making your own noise then you don't have to worry
about that. So that's an interesting thing to explore. Yeah?
>>: Does it matter if we're on carpet as opposed to some other surface?
>> Gabriel Cohn: So it does matter what your grounding is and this matters -- you know,
what shoes you're wearing and to some degree what the surface you're standing on is.
So, yeah, that does matter because it's going to affect the amplitude of the voltages on
your body, for example. Yeah?
>>: Also since you've mentioned power several times, have you done any work on how
you convert human movement or energy then to stored power? Since it's all about
movement.
>> Gabriel Cohn: So I haven't done any of that work. There's actually been a lot of that
work in the community looking at how do you harvest energy from the motion of people.
And it, obviously, depends on where on the body you want to harvest this. So the feet
are actually really good. There's actually a lot of energy that can be harvested from your
feet. Other things don't actually move that much, it turns out. People are learning this when they have FitBits on now; they actually sit around most of the day unfortunately. All
right, so let me move on. So the one thing I skipped was I said when the user moves
there's this voltage that deviates away from zero. And I said I'll ignore why that's
happening. But let me now actually talk about why that's happening.
It's actually not obvious, and I spent about six to eight months trying to figure out what
was going on there. And it turns out that this is caused by the static electric field between
your body and the environment. And this voltage happens as you move and change the
static electric field. So it turns out this is useful for more than just segmentation for
Humantenna and it actually spawned a whole other research project around static
electric field sensing. And the real advantage of this is that it's ultra-low power and
enables whole body motion sensing. So before I talk about what it means to do static
electric field sensing, let me talk about traditional electric field or capacitive sensing. So
this is something that's been done for many years in the HCI community, and the idea is
you actively produce a time-varying electric field and then sense how that field changes
as a function of user activity.
And this was done in the mid-nineties with a single frequency; more recently it's been done with swept frequencies. But in all this work you have to actively produce a time-varying electric field and then sense how that field changes. Static electric field sensing is different in that you passively sense the existing static or DC electric field that already
exists between our body and the environment. So I'll try to explain this with this field
diagram here. These red lines represent the electric field. The closer the lines are
together, the stronger the field. So it's strongest at the feet because you're really close to
ground. And I'm going to represent the environment around the user as brown. So if I put
a sensor on the user's wrist, there are two new electric fields here. There's one between
this local ground plane that's on the sensor and the body, and one between that local ground plane and the environment.
So I can represent this with a simple three-capacitor model to represent the capacitive
coupling between all these nodes. So there's the capacitive coupling between the body
and this local ground plane, between the local ground plane and the environment and
between the body and the environment. Now if I measure the voltage between the body
and this local ground plane, I can see that it's a function of the charge on both sides as
well as the capacitive coupling on both sides to the environment. Now this is interesting
because if the user, for example, lifts their leg they're going to change that capacitive
coupling which is this parameter CB and, therefore, this voltage will change. So
essentially there are changes in this voltage whenever the user moves.
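(One simplified way to read that three-capacitor model, ignoring the loading of the body-to-ground-plane capacitance itself: if the body and the sensor's local ground plane carry charges Q_B and Q_G, with couplings C_B and C_G to the environment, then the measured voltage is roughly

V ≈ Q_B/C_B − Q_G/C_G,

so a movement that changes C_B, like lifting a leg, changes V even though nothing is actively transmitted. This is a hedged simplification of the model in the talk, not the exact expression.)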
This is a really easy way, a low power way, of sensing whole body motion. And kind of the nice thing about this is the hardware is quite simple and, therefore, can be made very low power. We need contact with the body, some gain stage and then a low pass filter. Now the reason we have a low pass filter is because we know from Humantenna that there are signals at 60 hertz and all the harmonics; we want to filter all that out. We're
only really interested in this very low frequency signal, less than about 10 hertz. And so I
built this into a wrist watch form factor which – I can pass this one around. And we
began to explore like how is this useful? And I'm going to show you a video in which we
actually compare it to an accelerometer and show that it gives data very similar to that of an accelerometer. So in the background here you're going to see the top trace is the output of the static electric field sensing and the bottom three traces are the three axes of an accelerometer.
And what you'll notice is as I move my wrist you see this sinusoidal wave pattern on both the static electric field sensing trace as well as the accelerometer traces. So it can't give you the fidelity of data that an accelerometer can, in that it can't tell you your motion in three axes and it can't give you real units of acceleration, but it can give you some of the same kinds of
data about how much you're moving and maybe where you're moving. And so one of the
real advantages, why you might use this over an accelerometer is power consumption.
So this is plotted on a log scale, and you'll see that the static electric field sensing device
is actually about two orders of magnitude or a hundred times lower power than some of
the best commercial accelerometers for this application of sensing human body motion.
And it's even an order of magnitude lower power than the best research accelerometers
right now. And what's even more impressive is that this was built just using off-the-shelf
discrete analog components. If you were to make an integrated circuit like is done for
these accelerometers, you could bring that power consumption down another two to
three orders of magnitude. Yeah?
>>: So I see all your demos are in front of the big TV [inaudible] and environments
where there are likely to be strong electric fields. If you go off into the middle of a wet
field barefoot, you think this would work?
>> Gabriel Cohn: I did just that. I went into – I wasn't barefoot, I guess. But it was a wet field. It's 60 acres over in Redmond. I went into the middle of the field and there are no power lines anywhere around. You don't see 60 hertz there; you're far enough from power lines. But you still see the static electric field sensor signal. So basically this is
happening because your body and the environment are at some different potential so
there's going to be some field. And you're just sensing changes in that field. So it'll
actually work regardless of the environment. I also did some interesting experiments in
screen rooms and anechoic chambers and trying to suspend myself from the air. Desney
has videos of me hanging from the ceiling trying to test what matters. Is it your
grounding? Is it the environment? And it actually doesn't matter. It's always there.
>>: Do you think barefoot would matter, though?
>> Gabriel Cohn: So barefoot will change things and it'll change the amplitude, right?
You're still going to be at slightly different potential. If you truly have an ohmic contact to
the ground then yes, you won't see much. That's a little difficult to produce. But, yeah,
the amplitudes are significantly less if you're barefoot than if you have shoes on as you'd
expect. Yeah?
>>: I think these numbers may be a little misleading. So when you integrate it into a sensor, it's not the accelerometer or the sensors that dominate that energy consumption, right? It's the radio or the microcontroller and everything else. So optimizing this [inaudible] to be an order of magnitude cheaper may not give you much in the end device that [inaudible].
>> Gabriel Cohn: So it depends on the application. I'll actually, in a few slides, show you
a use case where this actually has an advantage. So you can ask again in a few slides if
you're still confused. Let me move on. So one of the other advantages in addition to
giving data that's similar to an accelerometer is it can also give you data that an
accelerometer just can't. For example, if you wanted to use an accelerometer to
measure not just how much my arm is moving but how much each limb is moving, right
now you have to use a number of accelerometers. And this can be rather cumbersome.
So with the SEF sensor you can actually do this from a single device. So here I'm going to show you another video, and I'm going to hold my arm steady but move my leg. So of course my arm is held steady so you'll notice the accelerometer sees nothing. The SEF sensor still sees a strong signal because I'm still going to change the capacitive coupling
between my body and the environment.
And to answer your question, the reason this might be useful is actually to use it in
combination with other sensors, like accelerometers, that people have begun using for
activity recognition and to use it for ultra-low power wake up. So you can imagine you're
using your accelerometers for some application, activity recognition or whatever, and
you want to turn them off to save power when the user is not moving but you don't know
when to turn them back on. And so you could use this sensor and do the thresholding in hardware with two comparators to generate this wake-up signal. And this whole thing
consumes only 6.6 microwatts so about the same power as a microcontroller in sleep.
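(A sketch of that two-comparator wake-up logic; the real circuit does this in analog hardware, and the thresholds here are hypothetical.)

```python
# Window comparator on the low-passed static electric field signal: wake the
# higher-power sensors when the voltage leaves a band around 0 V.
V_HIGH, V_LOW = +0.05, -0.05  # hypothetical comparator thresholds (volts)

def wake_signal(v_sef):
    """True when the SEF voltage deviates from rest, i.e., the user moved."""
    return v_sef > V_HIGH or v_sef < V_LOW
```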
Yeah?
>>: So just for comparison again, with the static field sensor do you always require the
machine learning and training session for it to understand [inaudible]?
>> Gabriel Cohn: No. So I actually didn't talk at all about machine learning on this project. So I actually don't have this in the slides but I did say, "All right, if we're going to use machine learning can we classify different types of activity?" And you can classify coarse types of activity, so no motion versus maybe hand and finger motion versus arm motion and then whole body motion like walking. And you can do that machine learning relatively simply.
What I was saying is you can actually forget about the machine learning and use this for
some of the same applications. People are using the FitBit and Nike Fuel Band just as a
pedometer and you can do that here without machine learning. Yeah? Oh, no? Okay.
All right. So let's move on. So I talked about some of these on-body applications in which I leverage the conductive properties of the body in order to either reduce the number of sensors required to enable human-computer interaction or to reduce power
consumption. And now let me talk about off-body sensing which instead of leveraging
the conductive properties of the body, I'm going to leverage the existing infrastructure in
the environment. And I'm first going to talk about a project called GasSense which is
actually the first thing I did in grad school. And the point of GasSense was to measure
the gas consumption at the device level. So this is in your home: of all your appliances that consume gas, which ones are consuming gas and how much gas are they consuming? Now, the reason you might care about this is that gas consumption is actually quite important. There's been a big focus lately on electricity monitoring,
and people have developed ways of doing this kind of disaggregated sensing for
electricity.
But it turns out that if you actually look at the average home, the majority of the energy is
consumed in natural gas and not in electricity. So just like electricity we want to get gas
consumption at this device level. So just like the human-computer interaction work there
are direct sensing ways of doing this. You could actually put an in-line gas sensor on
each of your appliances that consume gas. There are a few problems with this: if you
have a lot of appliances, it could be difficult to install these. But the real issue is we want
something that the end user can install. And you can't have the end user running around
their house cutting into gas lines. It's really just not safe. So, we need to find a more
indirect way of sensing this.
So we're going to utilize the existing infrastructure, take advantage of the fact that all
these gas appliances are connected to each other on this gas line. So if we sense
something about the gas line, we can figure out what's going on at the end points. Now
we still have this problem that we can't just cut into the gas main; that’s even more
dangerous. So we need to, again, take an indirect sensing approach. The approach is to
actually look at your gas meter. So if you've ever walked around the side of your house
and walked by the gas meter you may have noticed that it makes a sound. A residential
one makes a hissing sound and a commercial one makes somewhat of a low roar. So I
realized there was a sound and I studied what was going on. It turns out this sound is
produced by this disk here; that's your gas regulator. And let me describe briefly how it
works. Here's the cut-away of the regulator.
Gas flows through this pipe and as it does a sound is produced by this large resonant
chamber. This is similar to blowing through a whistle. As you blow air across the top of a
whistle a sound is produced by that resonant chamber below. And what's nice about this
is the amplitude of the tone that's produced is a function of the flow of gas. It's actually a
linear function of the flow of gas. And so we can pretty easily sense this simply by
putting a microphone outside the gas regulator. Similar to some of my other work, I'm
going to go through this pretty quickly. This is a noisy signal. We, again, need to use
some signal processing to pull out the signal we want. This is a spectrogram with
frequency on the Y axis, time on the X axis and the color encodes the amplitude of the
audio data. And you'll notice there are lots of noise sources here. We have cars driving
by, wind noise, even an airplane flying overhead. And what we care about is just the
signal right here that's produced by the regulator when gas flows.
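(As a sketch of that processing: isolate the regulator's resonant band in the spectrogram and map tone amplitude to flow using the roughly linear relationship described above. The band edges, window size, and calibration constants are hypothetical, and the real system applies machine learning on top of features like these.)

```python
import numpy as np
from scipy.signal import spectrogram

def flow_estimate(audio, fs, band=(6_000.0, 8_000.0), gain=1.0, offset=0.0):
    """Estimate gas flow over time from microphone audio near the regulator."""
    f, t, Sxx = spectrogram(audio, fs=fs, nperseg=4096)
    in_band = (f >= band[0]) & (f <= band[1])      # regulator's resonant band
    amplitude = np.sqrt(Sxx[in_band].sum(axis=0))  # tone amplitude per frame
    return t, gain * amplitude + offset            # assumed linear calibration

# Appliances turning on and off then appear as steps in the flow estimate,
# which is what the classification stage labels.
```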
So we can filter some of the signal out and then apply some machine learning, again, to
determine when each gas appliance turns on and off. And we actually show that you can
do this with about 95 percent accuracy to determine which gas appliances are being
turned on and off at what times. So I went through this really quickly because it's using a
lot of the same signal processing and machine learning techniques that I used with
Humantenna. But the idea is that instead of leveraging the conductive properties of the body, we're going to leverage the existing infrastructure in the home in order to enable, in this case, disaggregated gas consumption. Now this technique is quite nice in that we
can get a lot of information about what's going on in the gas infrastructure by putting a
single sensor somewhere on the infrastructure. The problem is we can't always rely on
this technique.
For everything we want to sense in the home, we can't always take advantage of
existing infrastructure. So sometimes we actually have to put sensors everywhere, use
what's called distributed sensing. And for this people typically use wireless sensor
networks. So imagine, for example, you want to know the temperature in every room
across your house. The best way to do this is to put a temperature sensor in each room
across your house. So let's say this is your house and you're going to put these sensors
all over the place. Typically you make these things wireless and have them
communicate their data back to the base station receiver. Now people do this because
it's really easy to deploy. You put the sensors out there, you turn them on, the system
works. The problem is, it's really hard to maintain. And the reason it's hard to maintain is
because all these wireless sensor nodes have batteries. And for any decent size
deployment, the vast majority of time and resources are spent replacing batteries. So we
really want to reduce the power consumption so these batteries last longer.
Unfortunately there's this well known trade-off between the power consumption and the
range of a wireless system. So imagine this is your home and you put this base station receiver in the center of the home, and you're going to put the sensor node in this corner
bedroom here. You can turn it on at high power. The signal will make it to the base
station but your battery won't last very long. If you want your battery to last longer, you
can turn down the power consumption and now your battery will last a lot longer but the
signal won't make it all the way to the receiver so your network is broken. So there's this
trade off between power consumption and range. And so any wireless sensor network
kind of tries to find the sweet spot in this tradeoff. So I'm going to present a project called
SNUPI which actually finds a work-around that doesn't have to deal with this tradeoff
between range and power.
And SNUPI is actually an acronym which stands for Sensor Network Utilizing Powerline
Infrastructure. And it's an ultra-low power, general purpose, wireless sensor network.
And unlike existing wireless sensor networks and similar to some of my previous work, it
actually utilizes the powerline infrastructure, not the gas lines in this case but the
powerlines. And it uses a technique called powerline coupling in which the receiver is
actually plugged directly into your wall, into the powerlines and uses the powerline
infrastructure as a giant receiving antenna. And so your sensors, your wireless sensor
nodes can still be wireless and you can put them anywhere. But instead of sending their
data wirelessly over the air all the way to the base station receiver, they wirelessly couple
to the nearest powerline. And the signal moves through the powerline to the receiver.
So let me show this graphically. So again we're going to put this base station receiver in
the center of the home, but it no longer has those two little antennas on the top. Now
we're going to plug it into the powerline infrastructure and use all the powerlines in the
home as a giant receiving antenna. So, again, if we put this sensor node in the corner
bedroom, we can turn it on at very low power. Not nearly enough power to reach all the way over there to the base station, but enough power to couple its signal onto the powerline infrastructure, and the signal will move through the powerlines to the receiver.
So in this way you can put sensor nodes all over the home and have them all
communicate to this base station in this star network topology.
Now the reason you might do this is that you still get whole-building range because
powerlines go everywhere in the building but you can dramatically reduce the power
consumption at each sensor node because its wireless range is now much shorter. So
here I'm doing a comparison between the first research version of SNUPI and existing
sensor nodes including ZigBee and Bluetooth nodes. And again this is on a log scale.
The first thing that you notice is that the total power consumption of the SNUPI system is
about an order of magnitude lower power than the existing nodes. What's even more
interesting is that the communication power, the power consumed by the radio is about
two orders of magnitude lower power than existing sensor nodes. And one of the
reasons for this is that we're using this powerline coupling technique. But the other
reason is that we're taking advantage of an asymmetric network. So the sensor nodes are transmit-only. They can't receive.
So, we can push all the complexity of the network to the receive side, and this way we
can produce simple, low-cost, and ultra-low power sensor nodes at the expense of a
single, complex base station receiver. Now there are a number of advantages of this in
terms of power and simplicity but also a number of disadvantages in terms of reliability
and robustness of the network. So one, there's no receiver on the node, and this means
we can't get acknowledgements. The node will never know if the data has made it to the
receiver. There's also no coordination between the nodes so no synchronization, no
scheduling, no routing. But also if the data doesn't make it, it can't automatically
retransmit because it doesn't know the data didn't make it. On the plus side there's no
overhead to do all this coordination.
But there are two big problems here: one is that we don't know that our data makes it to the base station, and if it doesn't make it, we don't know to retransmit. So to address this we use multiple retransmissions for important data. So if it's important that the data makes it, we'll transmit it multiple times. And we'll use forward error correction, again, to
increase the probability that the data makes it. So we'll never know for sure if the data has arrived at the receiver, but we can achieve an arbitrary level of reliability statistically. Now using this powerline coupling channel has a number of advantages in that it's actually very complementary to existing wireless systems. So an existing over-the-air wireless system actually works best in an open field where there's nothing around to attenuate the signal. In an indoor setting this means it works best in the center of a large room. This is actually the location where SNUPI works the worst because it's farthest from powerlines.
But if you take a scenario where you want to put a sensor, for example, under a large
appliance like a refrigerator or maybe even inside the refrigerator, over-the-air wireless
systems don't work very well because they're inside a metal box or underneath a metal
box. This is actually the location that SNUPI works the best because it's right next to a
large appliance that's plugged into the powerline. So in that way they're very complementary. But this new powerline coupling channel is unexplored so there are a lot
of questions that come up in terms of how do you design and build a system that works
in this way that's partially wireless and partially using the powerline as a transmission
line. So my thesis is focused a lot on trying to explore this channel. So I did a lot of
background noise and interference measurements around homes, around Seattle.
Actually, I didn't bug you guys about this; these are other people I know in Seattle.
And I did a lot of path loss measurements in homes as well. And although this is relatively backbreaking work, crawling around underneath people's sinks and placing sensors, it was actually very useful in understanding the system in terms of the frequency, bandwidth and so on at which you can use this network. So I'm going to go
through some of these really quickly. First there's the frequency of operation. Powerline
coupling actually works best in the HF band so between 3 and 30 megahertz. This is a
wavelength of 10 to 100 meters. And the reason for this is because in this frequency
band the powerline infrastructure is actually roughly the size of an efficient antenna. So
it's in this band that the powerlines more efficiently pick up or receive this coupling signal
from the nodes. Also in this band the powerlines don't attenuate the signal very much.
They work well as a transmission line. Now operating in this 3 to 30 megahertz band has
a number of limitations in terms of bandwidth. So if you look at the unlicensed ISM
bands that SNUPI can use, we have only about 10 to 100 kilohertz of bandwidth compared to the tens to hundreds of megahertz of bandwidth that traditional wireless systems
have.
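(As a quick check of the wavelength figures above: λ = c/f, so λ at 3 megahertz is (3 × 10^8 m/s)/(3 × 10^6 Hz) = 100 meters and λ at 30 megahertz is 10 meters. A quarter-wave element is then roughly 2.5 to 25 meters, the scale of the wiring runs in a building, while a watch-sized node is far below it, which is the antenna problem discussed below.)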
Now what this means is we can't divide our channel into subchannels. So all the sensors
share the same channel, and again they don't have receivers on them so they can only transmit. So we have to use pure ALOHA, where each sensor just sends data as soon as it has it. And what this means is collisions can happen. So in order to minimize the
probability that collisions will happen, we want to keep the transmit time as short as
possible so each node is occupying the channel as little as possible. We also need to
vary the time between transmissions. So since a lot of these things happen on a
schedule, we don't want a number of sensor nodes to end up on the same schedule and
always collide. And then, like I said before we can use multiple re-transmissions to
increase the probability that important data gets through and use forward error correction
again to increase the probability that data can be recovered if there is noise for example.
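(A back-of-envelope model of that reliability argument; the numbers are assumptions, since the real per-transmission success rate depends on the collision and noise environment.)

```python
import math

def aloha_success(G):
    """Classic pure-ALOHA model: a frame avoids collision with probability
    e^(-2G), where G is the offered load in frames per frame-time."""
    return math.exp(-2.0 * G)

def delivery_probability(p_single, k):
    """P(at least one of k independent (re)transmissions gets through)."""
    return 1.0 - (1.0 - p_single) ** k

# For example, with an 80% per-transmission success rate, three copies give
# delivery_probability(0.8, 3) == 0.992, about 99.2% end to end.
```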
Now another challenge is the antenna design. Because we're operating in this 3 to 30 megahertz range, an efficient antenna is on the order of 10 to 100 meters in size. So if
you want to make a device that's this big, you're not going to have a very efficient antenna. And so I did a lot of work trying to make a small antenna as efficient as we can. So I actually built an engine that estimates the antenna parameters using electromagnetics theory, and then you can actually solve a constrained optimization problem for a certain size and shape of the node, and it will tell you the most ideal antenna, either with wire-wrapped antennas or PCB antennas.
So then the next thing is actually the transmitter itself. So because we are really
interested in making this as low power as possible, we wanted to optimize the power
consumption of the transmitter. So we actually partnered with an analog integrated
circuit lab at UW to produce this custom integrated circuit. And the advantage here of
doing this in full custom analog is that we can reduce the stray capacitance and,
therefore, reduce the power consumption. And so we could actually achieve whole-home
range with this sensor network while consuming only 65 microwatts of power while
transmitting. So this is that 2 orders of magnitude lower power than some of the existing
sensor networks. Now the last challenge here is in terms of the receiver design and this
forward error correction, and this depends a lot on the channel itself. So because we're
using the powerline as this receiving antenna, we actually have some challenges in
terms of the receiver design. And that is that this antenna was not designed to be an antenna, and it actually changes over time. So as people flip on light switches and things, the impedance of the powerline changes. There are all kinds of noise sources on the powerline because of appliances that we need to deal with.
And this really affects the forward error correction. So we wanted to develop forward
error correction that works well in this noisy environment which is not a traditional
additive white Gaussian noise or even Rayleigh fading channel. You actually have power bricks that are producing noise synchronous to 60 hertz. The noise floor changes dramatically all the time because of appliances, and trying to produce error correction codes that can handle this and make the system robust has been another focus of my thesis. Now once
we understand all these things the next question is, what are the applications of SNUPI?
Well, first we need to consider the limitations. So for one it requires powerlines so this is
not the sensor network to deploy in the Amazon; it won't work. It really needs to work
indoors. So it's best for home and commercial applications. But like I said before, what's nice is that this is actually a problem domain for traditional wireless sensor networks. So
walls traditionally attenuate wireless signals and so they're typically a problem. In our
case walls are good because within the walls are those powerlines that we need for our
network. The other limitation is a low bandwidth that I talked about before. And because
it's low bandwidth, this is not a sensor network to use for streaming media, audio or
video, it's really for event detection and low rate monitoring. And in this application
domain we can actually achieve greater than 10-year battery life. And in fact that 10-year
figure is the limitation of coin cell batteries. So the shelf life of a coin cell battery is about
10 years.
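(As rough arithmetic behind that figure, assuming a CR2032-class cell of about 225 mAh at 3 volts: E ≈ 0.225 Ah × 3 V × 3600 s/h ≈ 2.4 kJ, and spreading that over ten years, about 3.15 × 10^8 seconds, gives an average power budget of roughly 7.7 microwatts. A node whose average draw sits in the single-digit microwatts, like the heavily duty-cycled 65-microwatt transmitter above, therefore hits the shelf life first.)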
So the battery will actually corrode before we've pulled all the energy out of it. Now
because this works best for these low rate indoor applications, it's really best for in-home
monitoring so home environmental sensing, smart home applications and home security
where you can use these low power simple sensors like temperature, humidity, moisture
and so on to get an idea of what's happening inside the home. So to enable these
applications some of my co-inventors and I created this startup company, SNUPI
Technologies in which we actually released our first product Wally Home about three or
four weeks ago now. And so one of the nice things about this company from the research point of view is the large data set that we have acquired.
So now we've deployed these things all over the U.S. This is not a map of where we've
deployed SNUPI; this is actually just what it looks like from space. But we've put them all
over the U.S. We now have several hundred systems out there, and we've actually
gotten a lot more data about what this noise looks like on the powerlines in homes all
over the place. And it's really informed a lot of the design in terms of how do we deal
with this noise that we couldn't see in the 10 homes we tried in Seattle. So in summary
SNUPI is this ultra-low power wireless sensor network. Let me actually pass around two
of these. So this is the original research version of SNUPI. This is the commercialized
Wally sensor node. I also have one of those Wallys that you can take apart; you can come up afterward and play with that if you want. There's also the receiver but it's not
packaged. You can look at that too.
All right. So I've talked a little bit about some of my work in both on and off-body sensing,
but looking forward I want to continue to identify these new opportunities for sensing,
build new embedded sensor systems and then apply these sensing systems to new
domains. In particular I've become very interested recently in a slightly different domain
of on-body sensing, and that is the domain of health and wellness. So this is work that I
did here last summer on continuous noninvasive hydration sensing. Desney captured my
excitement about hydration with this wonderful picture here during my MSR internship last year.
And the reason I've become really interested in hydration sensing lately is because there
was a study last summer which actually showed that 75 percent of Americans are
chronically dehydrated. And this is actually quite a shocking number. What's even more
surprising is if you actually look at the symptoms of chronic dehydration. The symptoms
are things like allergies, low energy, depression, hunger and digestive problems. And so
these things are often masked as other things, but often it is caused by chronic
dehydration. And one of the things that people don't understand is that a lot of the
beverages they consume throughout the day actually dehydrate them more. So things
that are loaded with caffeine, sugar and alcohol actually have a dehydrating effect. So
obviously it'd be very useful to have a device that monitors hydration.
The problem is right now the best way to do this is blood tests or urine tests. This is both
invasive and cannot be done continuously. You could also imagine tracking the flow. So
they actually make these water bottles that tell you how much you've been drinking. The
problem is, to accurately monitor hydration you need not just inflow but also outflow. And this is actually quite difficult because there are a number of ways in which
our bodies lose fluid throughout the day. So it would be very useful to have a sensor
which continuously and noninvasively monitors hydration. So similar to some of my other
work on-body, I'm going to take advantage of the fact that the body is a conductor and
use that to sense hydration. So the approach I'm taking is using bioimpedance analysis.
So this is something that's been done in clinical practice for quite some time but for a
different application, for actually monitoring body composition.
So you can actually buy a scale now that tells you not only how much you weigh but
what percent body fat you have. And if you look at the clinical studies using this
bioimpedance analysis they say it works fairly well for this assuming constant hydration.
And so my hypothesis is that if, instead of taking these measurements daily or a few times a month, we actually look on a timescale of minutes to hours, then
changes in bioimpedance should be a function of changes in hydration rather than
changes in body composition. So in order to test this, I built this large circuit which does
this bioimpedance spectroscopy to monitor hydration throughout the day. And we've
started doing some studies. This is from my internship last year, where I would continuously
monitor my bioimpedance throughout the day while very closely monitoring my hydration
as well.
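(As a sketch of the analysis side, bioimpedance spectroscopy data is commonly fit to the textbook Cole model; this is an assumption on my part, since the talk doesn't specify the exact model. The low-frequency resistance R0 is dominated by extracellular fluid, because low-frequency current flows around cell membranes, while at high frequency current also passes through cells.)

```python
import numpy as np

def cole_impedance(f, R0, Rinf, tau, alpha):
    """Cole model: Z(f) = Rinf + (R0 - Rinf) / (1 + (j*2*pi*f*tau)**alpha).
    Fitting R0 and Rinf to impedances measured across frequencies separates
    extracellular fluid from total body water; tracking them on a timescale
    of minutes to hours would then reflect hydration rather than body
    composition changes."""
    jw = 1j * 2.0 * np.pi * f
    return Rinf + (R0 - Rinf) / (1.0 + (jw * tau) ** alpha)
```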
And this is ongoing work, so I don't have any exciting results yet, but the idea is that we
could make like a wrist-worn device, like an arm band that can continuously monitor
hydration just for average consumers to get an idea of how dehydrated they are. We've
also talked to some physicians about using this during surgery to monitor hydration
which is something they're very worried about during surgery and right now they
basically guess. And I think it would also be very useful for medical researchers to study
chronic dehydration. So that 75 percent of Americans number is actually somewhat
debated because they can't really measure chronic dehydration, so with a device like
this people could actually study chronic dehydration and learn what are the effects of
this. So in addition to monitoring hydration, I want to continue to work in the space of
continuous noninvasive health sensing, but I also plan on working on other applications
of embedded sensor systems including technologies for accessibility and improving
healthcare in the developing world.
So by continuing to identify, build and apply some new embedded sensor systems we
can reduce these current adoption barriers and hopefully truly bring ubicomp to life. And,
again, this requires not only doing more research in each of these subdomains but doing
multidisciplinary research that crosses the traditional boundaries between the human
interface, the sensing, the computation and the communications. And I hope to be able
to collaborate closely with the experts sitting in this room as I have done for the last
several years to help develop the sensing technology of the future. So I'm going to thank
everyone I've worked with, many of whom are here, over the past few years on these
projects and open it up if you guys have any other questions. Thanks.
[Applause]
>>: So I currently have a smart watch in my office that does hydration sensing but
frankly I don't think it does it very well. I think it's just measuring sort of ambient dryness
in my skin or some such thing. Can you comment on sort of how accurate [inaudible]...
>> Gabriel Cohn: Yeah, so what I'm doing is a little different. I'm not actually looking at
the skin hydration. By doing this bioimpedance analysis, at the moment anyway across a
larger area of the arm, we're actually looking at the hydration inside the arm. And the
other thing I'm doing is instead of looking at a single frequency, by doing this
bioimpedance spectroscopy and looking across frequencies we can actually tease apart
the difference between intracellular fluid and extracellular fluid. So it turns out, as you might know, hydration is actually rather complicated. There's no single medical definition of dehydration.
What we really care about is the fluid in a number of different compartments inside the
body as well as the sodium and potassium concentrations. And by doing this continuous
monitoring of both intracellular and extracellular fluid, we can also see the shifts in the
fluid between those compartments which can give us some idea of those ion
concentrations as well.
So I think with such a device you could actually get a lot more medically relevant data
about dehydration.
>>: One of the issues in that realm is always the time it takes from replenishing and it
showing up at some distant site.
>> Gabriel Cohn: Exactly.
>>: [Inaudible] that lag.
>> Gabriel Cohn: And I think that's also something that would be kind of interesting for
the average everyday consumer to understand, like, "When I drink, how long does it take
before my body actually gets that hydration?" Yeah?
>>: I mean, you mentioned that you foresaw some applications of embedded sensors to
accessibility. I was just wondering if you could expand a little [inaudible]...
>> Gabriel Cohn: Yeah, so I mean if you look at kind of the HCI community in general
right now, we've done a lot of interesting work on using hands-free interfaces and things
like that. And I think it would be really interesting to take a lot of these technologies
we've developed for HCI and move them into the accessibility realm where the problem
is a little different. Dealing with errors is a lot more difficult. And trying to actually take a
lot of this new research and produce devices that can help in accessibility. So I haven't
done too much diving into that yet but I kind of see there is this big potential there to
enable a lot of technology for accessibility.
>>: So you mean accessibility for motor impairments?
>> Gabriel Cohn: It could be motor.
>>: Because your picture showed Braille.
>> Gabriel Cohn: It could be vision as well. There has been a lot of work in the
community on eyes-free interfaces. And so can some of those be applied to accessibility
is one big question. And the design constraints are different with blind users. Yeah?
>>: For the microwatt power levels you were talking about for SNUPI, what's the
bandwidth we're talking about [inaudible]. You said something like 50 microwatts.
>> Gabriel Cohn: So the bandwidth – It depends on – You're partially limited legally by
unlicensed bands. But if you kind of ignore that, I would say it's around 100 kilohertz,
200 kilohertz, hundreds of kilohertz basically.
>>: Kilohertz? [Inaudible] like roughly you – [Inaudible] the kind of data you can move
over the wire. You know what I mean?
>> Gabriel Cohn: Mmm-hmm. Then, I think...
>>: Like move a string every minute? Like you know that...
>> Gabriel Cohn: Yeah, you can move a string every minute. So I think you're going to
be more, I'd say, on the order of 1 kilobit per second.
>>: That's pretty good. Okay.
>> Gabriel Cohn: But if you were to do that – So part of the problem is it depends on
how many sensor nodes you have. So you can maybe have one that's doing that
continuously but if you have ten of them maybe you get a tenth of that rate for the
network. Yeah?
>>: How effectively is your powerline signal able to jump between phases? In residential homes it has two phases; in commercial it has three phases.
>> Gabriel Cohn: So one of the nice things about the HF band, 3 to 30 megahertz, is that it actually couples quite well. So 3 megahertz doesn't couple quite so well, but basically above 10 megahertz it couples very well across the phases. So that's actually not a problem. A lot of
the existing work on powerline communication had that problem initially, like X10 for
example. And they were using 125 kilohertz. So that was their problem. The newer
systems, like some of the new INSTEON systems, actually use the same band, 3 to 30 megahertz, and they don't have that problem any more.
>>: Would there be a potential conflict between INSTEON and SNUPI then?
>> Gabriel Cohn: So I mentioned the forward error correction and because we're
making a commercial product a lot of the challenge is how do we make it work with
Insteon, with these existing systems. And it does work actually quite well alongside that.
There is more packet loss but we handle it. Yeah?
>>: So how far does it go? [Inaudible]. I remember as a kid using X10 devices in my
house to control my neighbors house.
>> Gabriel Cohn: Well, in a home it doesn't go very far because it typically doesn't make
it through the meter. There's nothing [inaudible] typically in a meter. It doesn't couple
across it. Certainly even if it does go through the meter, it won't make it across the pole
top transformer. In an apartment building or a condo then, yes, it will – your neighbors
will see your signals as well.
>>: So when you calculate your bandwidth then you have to take into account all your
neighbors.
>> Gabriel Cohn: They're part of the same network, yeah. Exactly.
>> Desney Tan: Going once. Going twice. Thank you, sir.
>> Gabriel Cohn: So I have...
[Applause]