>> John Boylan: [laughs] I’m John Boylan, and I coordinate this series as well as some … as the other
Studio99 efforts with some of the Studio99 stalwarts. This afternoon, we have Thomas Deuel, who is a
neurologist, neuroscientist, and sound artist—currently works with the Swedish Neuroscience Institute
as well as being an affiliate professor at the University of Washington’s DXArts, digital and experimental
arts program—and we’ve brought Thomas to talk about the proje … the work he’s been doing using
encephalogram … encephalograph …
>> Thomas Deuel: Electroencephalogram.
>> John Boylan: … electroencephalograph to generate sound and music in a term he’s calling the
encephalophone. So anyway, that’s about it for me. Thomas …
>> Thomas Deuel: Thanks. So I’ll be talking to you about the encephalophone. So it’s a musical
instrument that I have been working on that’s driven by thought and using EEG or
electroencephalogram. I’ll tell you a little bit about EEG in general and how it works; I’ll talk a bit about
the history of people taking EEG and turning it into sound and music; and then, I’ll talk to you about my
device more specifically; and then I have some videos with demonstrations. And feel free to ask
questions along the way. I’ll get fairly technical; I can get more technical; I can get less technical; stop
me and feel free to interject. So just relevant personal background—John touched over this a little bit—
but the things that brought me to this place to create this device are: my main job is I’m a neurologist;
I’m with the stroke team at Swedish Neuroscience Institute; and I also read EEGs for them with the
epilepsy group. I have a PhD in Neuroscience, and most recently, as a … in my post-doctoral work, I've
been working on processing of complex sound and, specifically, music—the cognitive process. I'm also an
amateur musician, and I have a certificate in music composition from New England Conservatory, and I
practice as a sound artist, mostly doing audio installation artwork and field recordings. And then, I have
a studio slash laboratory; I just call it a studio-lab, ‘cause I don’t … it’s really halfway between laboratory
and art studio—kind of combines these things—at Inscape Arts, which is in the I … international district
here. I’m also an affiliate professor in digital arts and experimental art—the DXArts program at UW—
and I teach a class called Art and Brain, and there's an Art and Brain laboratory that we have there as
well.
So that’s a whole lot of parts, and they … all these parts kind of come together in this … the
development of this device I call the encephalophone. So it’s a novel musical instrument; it generates
musical notes from conscious control without movement in real time; it uses EEG, which I’ll talk about
some more; and it uses intentional control, which is different—as I’ll show you—than people’s
developments in the past with EEG; rather than sort of a passive generation of music, you can actually
control it consciously and intentionally in real time. So essentially, you control you … the music with
your thoughts, and I have a patent pending on the device.
So EEG: it’s a medical diagnostic technique; it reads electrical brainwaves. Brainwaves would be a
colloquial term, but it just reads the electrical activity of the brain. It uses metal electrodes and a
conductive gel placed on the scalp. That's me with an EEG cap on. It was developed to diagnose
epilepsy—and it’s still how it’s used primarily—and sleep disorders, and it’s also used increasingly in
neuroscience research. So its advantages are its very fast response—so millisecond response—it’s also
noninvasive in this format—you put a cap, or electrodes with gel, on your head—and it’s very
noninvasive. There is invasive EEG, and we do that in the hospital for intractable epilepsy patients,
where we’ll cut their skull open and place electrodes, either implanted or placed directly on the brain,
and the signal’s much better than it is here, but I wouldn’t go and do that to you to help to … just to
generate music. But in general, this format—the scalp EEG—is very noninvasive; so I don’t have to inject
any drugs or give you any radioactive tracers to measure. Disadvantages are the localization’s pretty
poor; we can kind of tell what general region of the brain, but specific localization is not very good; we
can’t get down to millimeter resolution—not until we get intracranial, when we cut the skull open—but
in this format, not great localization. And also, there’s a lot of noise from non-brain signal, so when
we’re listening to the—I use sound metaphors a lot—but when we’re listening to the brain, electrically,
at the scalp, it’s quite quiet—we’re talking about microvolt signal; two to a hundred or so microvolts are
the typical ranges—around there, there’s all kinds of non-brain signal that can drown out EEG, so you
have to be very … you have to listen very carefully. It would be like being in a loud party, and you’re
talking to somebody who has a very soft voice; you either have to get in another room and get all the other
people out of the way or get really close to them and just tune everything else out to … in order to hear
it. So that’s a big disadvantage of EEG—scalp EEG.
So what it's measuring is synchronous activity of many neurons—many nerve cells—at once, not one or
two but thousands; that's what's required to measure anything at all. If you have a hundred neurons, you're
not going to measure that at the scalp, and it’s estimated that you need approximately six square
centimeters of surface of the cortex in order to actually get a signal. To top that off, these signals are
volume conducted through CSF—the cerebrospinal fluid—and the meninges—the coverings of the
brain—and the skull, and then the scalp, and skin, and all that. So that attenuates the signal; so then,
when … by the time you get out here, the signals that you’re measuring are in the range of two to two
hundred microvolts. Then there’s all these non-brain signals that I talked about that can cause artifact,
and they can cause misinterpretation—and they often do—of EEG signals. So muscle, for one: if you—
you know, the muscles of mastication, of swallowing—if you all bite down, you notice there’s the
masseter right here—pops out—and then even the temporalis—it’s this muscle all the way up here in
your … if you bite down, you can feel it—all the way up here, there’s some muscle activating; that’s
millivolt range—so a thousand times stronger signal. So if you’re chewing, you’re talking, you’re biting
down, it’s gonna drown out the EEG signal. Other things: there’s the orbit of the eye, so when you blink,
and you move your eyes around, there’s a big dipole, and that gets measured; so that’s another source
of error. The tongue itself has a big dipole, so if you’re talking, if you’re chewing, again, it’ll cause noise.
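To put the artifact problem in perspective, the amplitude gap described here (microvolt EEG versus millivolt muscle activity) works out to roughly 60 dB. A minimal back-of-the-envelope sketch, using only the rough ranges quoted above (the specific 50 µV / 50 mV values are illustrative):

```python
import math

# Rough ranges from the talk: scalp EEG is on the order of
# microvolts, while muscle (EMG) artifact is on the order of
# millivolts -- about a thousand times stronger.
eeg_v = 50e-6     # a mid-range EEG amplitude, ~50 microvolts
muscle_v = 50e-3  # a comparable muscle artifact, ~50 millivolts

# Amplitude ratio expressed in decibels: 20 * log10(ratio).
ratio_db = 20 * math.log10(muscle_v / eeg_v)
print(f"muscle artifact is ~{ratio_db:.0f} dB above the EEG signal")
```

A thousandfold amplitude ratio is 60 dB, which is why chewing or talking can completely bury the underlying brain signal.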
So where are you measuring them? You’re really measuring—when you’re measuring at the scalp—
you’re measuring voltage potentials of charge movement across these neuronal membranes, across
these cells of many neurons at once—again, on the order of thousands are required at minimum. It’s …
the recorded EEG is generated by the cortex, so it’s only the surface of the brain—you’re not measuring
deep structures at all—and to top it off, the signal needs to be perpendicular to the scalp. So there’re all
kinds of nerve cells that are pointing in different directions, and you’re only measuring the ones that are
pointing to the scalp, and you’re summing all these things called the EPSPs and IPSPs—they’re postsynaptic potentials; you’re not measuring action potentials, which are what people often think of when
they think communication between neurons. So why is all that important? Well, the thing is: you’re
only … you're really only measuring a small subset of the whole electrical activity of the brain, so that makes interpretation difficult.
So we use this international system for placing the electrodes so that everywhere in the world, it’s the
same; if I read an EEG from Istanbul, or from Shanghai, or Wenatchee, I’m gonna have the same format
come out; so I’ll know what electrode is what part of the brain. So we use anatomical landmarks; so the
inion, which is this little—you can feel it in the back of your skull—there's a little protuberance out
there in the occiput; it’s … that’s the inion. And then the nasion is this low point in the nose, where
people’s glasses, the bridge is. All the electrodes are placed with percentages of the cir … that
circumference—or half-circumference—and it’s called the international ten-twenty electrode system,
because they’re ten and twenty percents of these various distances. Even numbers are on the right; odd
numbers are on the left; and then their regions of brain are labelled as well; so F is frontal; C is central; T
is temporal; P, parietal; O, occipital so that you have a … you know where you’re at, and this is the very,
very standardized-around-the-world system. Then you have electrodes; each of them are measuring
voltages; and then, as I’m sure you all know, but most … you measure voltages as potential differences;
so it’s something … it’s a voltage relative to another voltage; it could be relative to ground; it could be
relative to another electrode. So these different montages that you use: you can measure one next to
another; so a bipolar montage is a very popular one; you're measuring the voltage of one electrode to the
one adjacent to it; and that’s what we will mostly be looking at. You can do a referential, where they’re
all related to one; so you can have a gr … sort of a … it’s not truly a ground, but a reference on the
forehead or on the neck; and all the other electrodes are referenced to that.
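The naming and montage conventions just described lend themselves to a short sketch. The region letters, the even-right/odd-left rule, and the idea of subtracting adjacent electrodes all come from the talk; the particular electrode chain and the voltage values are hypothetical, for illustration only:

```python
# 10-20 system labels: letter(s) give the brain region; even
# digits are right hemisphere, odd digits left, 'z' is midline.
REGIONS = {"Fp": "frontopolar", "F": "frontal", "C": "central",
           "T": "temporal", "P": "parietal", "O": "occipital"}

def describe(label: str) -> str:
    """Turn a 10-20 electrode label into a region/side description."""
    prefix = label[:-1] if label[:-1] in REGIONS else label[0]
    suffix = label[len(prefix):]
    side = ("midline" if suffix == "z"
            else "left" if int(suffix) % 2 else "right")
    return f"{REGIONS[prefix]}, {side}"

# A bipolar montage subtracts each electrode from its neighbor in
# a chain; the referential voltages (in microvolts) are made up.
referential = {"Fp1": 12.0, "F3": 9.5, "C3": 7.0, "P3": 4.5, "O1": 3.0}
chain = ["Fp1", "F3", "C3", "P3", "O1"]
bipolar = {f"{a}-{b}": referential[a] - referential[b]
           for a, b in zip(chain, chain[1:])}

print(describe("F3"))     # frontal, left
print(describe("O2"))     # occipital, right
print(bipolar["Fp1-F3"])  # 2.5
```

The chain Fp1–F3–C3–P3–O1 is one of the standard left parasagittal runs in a bipolar ("double banana") montage; each trace on the EEG page is one such difference.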
This is what a normal EEG looks like; this is … well, one of the bipolar montages. So what we’re looking
at is a whole lot of squiggly lines, but going across the page is time—so on the x axis, that’s about ten
seconds going across—and each one of these essentially represents another … a single electrode;
they’re one relative to the other, as I mentioned, but this is about twenty electrodes going across; and
as you can see, there’s some periodic activity; there’s some regular waveforms that are coming in a
periodic fashion; they’re not just … there’s some variation, obviously, but you can see some regular
frequencies coming in there. So we take all that, and we divide it up into these frequency ranges; the
slowest—they’re pretty arbitrary, honestly; they’re just categories of frequency ranges to help us sort
them out—the slowest are called the delta, up to four hertz; and then four to eight is theta, and then
alpha, and beta; and there’s gamma, which is even faster; and the general rule is: as you go down to the
slower, that’s the more sleep, and more … less wakefulness, and less attentiveness; and as you go up,
it’s more attentive, and that part of the brain is more active in general. That’s the general rule; there’re
exceptions.
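The band boundaries just described can be captured in a small lookup. The delta and theta cutoffs are the ones given in the talk; the alpha/beta/gamma boundaries here use common conventions (roughly 8–13, 13–30, and above 30 Hz), which, as noted, are somewhat arbitrary and vary between labs:

```python
def eeg_band(freq_hz: float) -> str:
    """Map a frequency to the conventional EEG band name.

    Delta/theta cutoffs as stated in the talk; the remaining
    boundaries follow common convention and differ slightly
    between laboratories.
    """
    if freq_hz < 4:
        return "delta"   # slowest: deep sleep, low arousal
    if freq_hz < 8:
        return "theta"
    if freq_hz < 13:
        return "alpha"   # e.g. the ~10 Hz eyes-closed resting rhythm
    if freq_hz < 30:
        return "beta"    # more attentive, more active cortex
    return "gamma"       # fastest

print(eeg_band(10))  # alpha
print(eeg_band(2))   # delta
```

The general rule from the talk maps onto this directly: the slower the band, the less wakeful and attentive that region tends to be, with exceptions.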
This is another normal EEG, and there’s some interesting patterns here; on the left—again, it’s the same
setup, so this is about ten seconds, and about … each one of these squiggly lines is a different
electrode—on the left, the subject has their eyes open, and then they close their eyes, and these are
some normal patterns. So with their eyes open, we have these things with the black arrows, called
lambda waves; those are waves that appear in the visual cortex in the back of the brain when it’s very
active, and it’s looking around. So if you’re reading a book, or scanning, or looking up at the ceiling,
those lambda waves will come out; it means your eyes are open, and you’re scanning something
actively. And then, there’s—you can probably see a big dip right in the middle, not the red arrows, but
right to the left—that’s the eye; so that’s the artifact I was talking about; and the eye closes. Every time
you close your eyes, you have something called a Bell’s phenomenon, and your eyes actually go up. So
you close your eyes, you blink—even when you blink—your eyes go up, and they come back down; it’s a
reflex; you can’t help it. You may not believe me, but it happens. So as that goes up, the dipole of the …
between the cornea and the retina causes a big deflection, so that’s a source of error—that big dip—but
then after it, you see this fast alpha frequency, so it’s around ten hertz there—that’s the red arrows. So
that's, again, the visual cortex, but now with the eyes closed, it's not … no longer receiving input like it
was over on the left, and it’s resting; it’s sort of a holding pattern; it’s that part of the brain saying, “I’m
here; I'm ready; I'm awake, but there's no input." Even if you're thinking—even thinking about
imagery—you're not gonna change that. So people ask this often: if your eyes are closed, and you're
just imagining things, you still get this resting rhythm; you won't get that active scanning type of pattern.
Other normal patterns: there’s something called the P300; it’s an event-related potential. So this is a
pattern that … it basically recognizes novelty; it could be a visual novelty; it could be an auditory novelty;
it could be tactile. So all of a sudden, an elephant appears, and you'd get a P300, 'cause it's new; you
haven’t seen this elephant before. And then I go back, and I show you the elephant again, and you get
another P300. Around the third time, the P300’ll go away, ‘cause it’s no longer novel; it’s really a
novelty kind of detector, or I should rather say, “It’s not a … it represents the detection of novelty,”
rather than “It’s a detector.” Then, you kind of get used to the elephant, and the elephant’s no longer
new and interesting, and the brain gets very bored quickly with things that get repeated. But then, if all
of a sudden, I show up a … there’s a monkey, then boom, you’ll get another P300, and then the monkey
comes back, maybe another one, and then maybe the third time, you’re … the monkey’s no longer
interesting anymore. Then, if I go back to the elephant, now, the elephant’s new again, so you’ll get
another P300—that’s how it works. It’s a normal signal; it’s a … the reason I bring it up is people use
that as a trigger for various things for control of devices and … with EEG.
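The habituation pattern in the elephant/monkey example can be sketched as a toy model: a response fires for a novel stimulus, fades around the third repetition, and returns when the stimulus changes, or when an old stimulus becomes novel again. This is only a caricature of the described behavior, not a physiological model:

```python
def p300_responses(stimuli, habituate_after=3):
    """Return, per stimulus, whether a toy 'P300' fires.

    Fires on the first presentations of a stimulus and goes away
    around the third repetition (per the talk); a change of
    stimulus resets novelty.
    """
    recent = None
    streak = 0
    out = []
    for s in stimuli:
        if s == recent:
            streak += 1
        else:
            recent, streak = s, 1  # novelty (or renewed novelty)
        out.append(streak < habituate_after)  # True = P300 fires
    return out

seq = ["elephant"] * 4 + ["monkey"] * 3 + ["elephant"] * 2
print(p300_responses(seq))
```

Running this on the elephant/elephant/elephant/monkey/… sequence reproduces the described pattern: responses to the first couple of elephants, none once it's familiar, a fresh response to the monkey, and a renewed response when the elephant returns.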
So I mentioned a lot of this noise; I won’t … probably don’t need to go over all of it again, but just to give
you some examples, on the left, that picture—those big deflections—that's the eye again; when you
blink, you get this huge deflection, but other sources of noise can be just poor conductance—you don’t
have good gel contact between the electrode and the scalp—or electrical noise—cell phones nearby,
other electronics in the hospital—we’ll have IV pumps and things, and the electrical noise can be a
problem, ‘cause they’re—you know—these are very sensitive electrodes with high dynamic range.
Muscle, so I mentioned, like, the chewing muscle up here in the temporalis; that's what's going on there;
so all that red, very fast activity that just looks like a big red smear at the top of that, that's all muscle.
So you can imagine trying to read underneath that is pretty darn difficult; it's a big source of problems, so
people have to be kind of quiet, not using their muscles. Even just standing up with an EEG on, the
muscles that are keeping your head up, that are propping up—you know, the neck muscles just
propping up your head—they'll create a lot of noise, so the best way to get a good reading is to have
someone lying down, and quiet, and not chewing, and not talking, and … and then, another issue is, of
course, that besides the fact we have all this noise, a lot of these signals we recognize as patterns, and
we have a general sense of what they represent, but we don’t know what a lot of the rhythms mean,
quite honestly.
So people have been trying to harness EEG for creating devices that control things for a long time—so
brain-computer interfaces. I really need to update that photo; that’s a, like, a 1982 monitor; and … but
that’s a stock photo. So the idea of a brain-computer interface is a direct communication between the
brain and a computer, and you might use it to restore or even augment cognitive or motor functions—
so restore in the sense of someone who’s got motor disability, augment in the sense of a normal person
who could control things without moving. And we've been researching this for—you know—fifty years or
so—it started at UCLA under a grant from DARPA—BCI research has been going on for a very long time
now—and even after almost fifty years, we're kind of bumping up against the limits of the technology of
EEG. And the best groups really achieve about seventy-five percent accuracy
with a lot of training, really good protocols. So this is … why does that happen? It’s because of all of the
things I was telling you about with the artifact, the noise, and then, to top it off, we … the specifi … lack
of specificity; we don’t—again—we don’t know what a lot of this means; there are very few signals that
we actually know the meaning of. Yeah?
>>: Just in general, I mean, if you were willing to be invasive and—you know—drill holes in people’s
skulls and stick electrodes wherever you wanted to, would almost all of these problems go away?
>> Thomas Deuel: A lot of ‘em do. We then have a whole ‘nother body of knowledge we need to
acquire with very few subjects, because I …
>>: Sure.
>>: … I worked with a … so I did a fellowship in surgical epilepsy at Harborview, and we … and one of the
reasons I went there was because they did a lot of surgical epilepsy. So we did a lot of this; one of the …
probably, the … one of the biggest centers in the world for …
>>: So in theory, if somebody was …
>> Thomas Deuel: And …
>>: … really, really wanted to do this …
>> Thomas Deuel: Jeff Ojemann, who’s the neurosurgeon I work with, also worked on brain-computer
interfaces intracranially, but the basic answer is yes. The … what makes it complicated is the number of
people that you—you can imagine—yeah, it's very invasive; we're … that's not the purpose of why we've
cut their skull open in the first place.
>>: Right.
>> Thomas Deuel: We’re trying to find out where their focus of epilepsy is so that we can take out the
smallest piece of brain possible. And then, we have to consent these people, so you can imagine the
ethical barriers: “Do you mind if I make some music out of this … these electrodes that are implanted in
your brain in surgery while—don’t worry, we’ll still take care of your epilepsy, but …” We can actually
ethically consent people, but the … it’s ju … access is very difficult. It’s one of the reasons I was doing
research there is because I had access to patients after—patients who will consent while they’re still
awake, and yeah.
>>: But who …
>> Thomas Deuel: But the an … the basic answer is yes; a lot … the noise goes away, but then we have …
we still have the specificity problem, like we have a …
>>: That’s more of, like, you get a bigger n, might be able to figure it out. So …
>> Thomas Deuel: Exactly, yeah.
>>: But that old William Gibson model from—I guess—the first novel with … where you actually plug in
with a jack. You have to still have full access to the brain, don’t you? You couldn’t just jack it in.
>> Thomas Deuel: Well, not necessarily, ‘cause you could … you don’t have … there … we—you know—
we generally are using triggers from a very specific part of the brain, so you don’t necessarily need the
whole access to the whole thing.
>>: Really?
>> Thomas Deuel: Yeah, and we will have … we have patients who have electrodes coming out of their
skull, and they’re able to walk around; they stay in the hospital while that happens, but they can walk
around, go through their room; they’re talking; they’re awake; and they have [laughs] they have
electrodes coming directly out of their brain.
>>: Wow.
>> Thomas Deuel: So yeah, believe me that would—you know—it would be a lot better that way, and
ethically, it’s actually … I’ll talk about it later, but I’m—you know—using this not only to generate music,
but also for cognitive rehab and for neurological rehab, and for that … for those people, that might be a
very ethical use of that technology.
So people have been doing … turning EEG into sound and music since the very beginning of EEG, really.
So 1929 was the earliest description—this guy, Hans Berger—and a few, like, weeks ago, I noticed that
there's this strange piece of brain in this photo—it's the only photo I have of Hans Berger; it's a little
disturbing. But he was the first one to describe EEG, and then, 1934, these guys Adrian and Matthews
replicated his results, and while they were doing that, they tried: “Well, let’s try monitoring our own EEG
with sound. Maybe we can just hear things.” So very early on, people were doing this—trying to
convert it into sound—and artists have been trying to convert EEG into sound and music for art pieces
for a very long time, [white noise] and one of the earlier, famous ones was Alvin Lucier, the composer,
who had a piece called “Music for Solo Performer,” and he essentially was looking at the posterior
dominant rhythm—so that sort of resting rhythm that’s in the back of the brain when your eyes are
closed, but you’re not actively using your visual cortex. It turns on when you close your eyes, quite
simply. He was using that to drive percussion instruments—so timpani and various other instruments—
in this performance called “Music for Solo Performer.” He collaborated with John Cage, and it was a
great piece conceptually, but in practice, what really happened was they … he’s sitting up; he had a lot
of noise; he didn’t have good referential … it’s not very good signal, and John Cage was watching the
signal coming out and realizing in real time that it’s not working so well, so he … John Cage kind of
cheated and adjusted the levels. It made for a good performance, but it wasn’t scientifically very tight;
I’ll just put it that way. This is a really long video, but there’s the … part of the performance was the
preparation, and so I won't go through the whole thing, but there'll be some others I can show you.
People continued doing variations of this; this was … David Rosenboom did some pieces in the
seventies; this one called “Ecology of the Skin”—we don’t have a recording of this one; we have a
schematic, but … and a description, which I don’t even quite understand—but it was called “Ecology of
the Skin,” and he’s … they were … this is a direct quote: “Brain signals of multiple participants controlling
mixing of music played by keyboard performers, along with phosphene visuals.” I actually don’t know
what that means, but …
>>: Phosphenes are the … when you close your eyes, you get these, like, kind of symbols—right—if you
push hard on your eyes or if you [indiscernible]
>>: They’re [indiscernible] to perturbation.
>>: Yeah, like natural geometric patterns that your eyes create.
>> Thomas Deuel: This was somehow phosphene visuals for the viewers, some … I … who knows? But
anyway, he did another piece later, called “On Being Invisible,” and this one was a little better described,
and we have recordings of it. So it was a self-organizing multimedia performance using event-related
potentials from two performers’ brains. So event-related potentials are things like the P300 that I’d
shown you earlier. It’s exactly what it sounds like; so it’s a potential; it’s a voltage that has … that you’ll
see with some particular event, but one of the more common ones that people use is the P300. So
there … the first one was more sound—just sound. Let’s see … should … maybe I will go this way to
make this happen. [music] That’s “On Being Invisible” one; in two, he used more human voice. [music]
>> video: Also the [indiscernible] described [indiscernible] each separate time. It diminishes in its … it
has rallied the confusion [indiscernible] Thus emerged the national dilemma and contradiction, and for
him, a profoundly personal dilemma [indiscernible]
>> Thomas Deuel: Not sure why those aren’t playing under full-screen. And people have continued to
try to make music and sound from EEG. A big proponent of this—that’s done a lot of work—is a
composer at the University of Plymouth in the UK, Eduardo Miranda, who’s on most of these papers. So
he’s used some sort of … a lot of them are passive control, where you take a recording and then
transform it into music in some way. One he used that was a little closer to control—although not
actually technically good control—is this using alpha and … versus beta rhythms; the beta is a little more
attentive, and the alpha would be a little more resting. Yeah?
>>: So I was curious: are these pieces replicable?
>> Thomas Deuel: It depends; I mean, the ones that are recorded, and then you do post-hoc analysis,
sure; you could replicate them. Some of them are live, real-time, and so they’re improvisational, so you
can't quite, but you probably could go back and figure out how to do it. "Music for Solo Performer," that's
replicable; I’ve … but I mean, of course, it wouldn’t be the exact same as the original performance, but …
does that answer your question? I’m not …
>>: So write some sort of … yeah, my question is just like: if, like, I have a second performer, so you
could keep … at least produce something similar to the first one.
>> Thomas Deuel: Meaning you play some passage of music, and then someone else tries to replicate
it?
>>: Right, I think, like, differentiate that from random noise.
>> Thomas Deuel: That would be theoretically possible; it would require really good control and … very
tight control and good … and a lot of skill on the performer.
>>: Right, because I think, like … different quest …
>> Thomas Deuel: In these settings, no; the answer’s no.
>>: Right, but a different question is like: ultimately, so even if, like, the device just captures the noise …
>> Thomas Deuel: Yeah.
>>: … the noise could, like, make it generate music.
>> Thomas Deuel: Yeah.
>>: So how do you know whether, really, this is results of the noise or …?
>> Thomas Deuel: That’s a very good question, and easy for me to be critical of others, but that’s …
there are big problems with noise in a lot of these papers here. They're not … they haven't shown that
they have very specific control—that is, proven accuracy. So I'll talk a little bit about my device, but I've attempted
to do that—try to demonstrate … prove that it’s actually accurate, you know, actual control, real control.
These guys at the bottom, from Stanford, did a pretty interesting thing—again, this is not control, but it's …
they took seizures and made it … did a transform that made it sound like human voices, and you can
kind of hear the whole evolution of the seizure go through; it’s quite impressive. So then, my device is
new for a few reasons; one is one of the things you brought up; but it’s a new musical instrument and a
cognitive therapy device. I did it in collaboration with my research collaborator in my post-doctoral
work, who’s Felix Darvas; he’s a physicist at UW, in the department of neurosurgery. I have a patent
pending on it; its rationale is that: yeah, we can only get seventy-five percent control over, say, a robot
arm—which is not good enough; if you’re paralyzed, and I hook you up to a robot arm with your brain,
and you only get seventy-five percent control, you can see why that would be not good enough; and we
haven’t really gotten much higher than that—that’s not good enough for a robot arm, but it might be
good enough for music, as long as it doesn’t sound bad, and that twenty-five percent of the time, it’s not
horrible. As long as you get a sense that you have some kind of control, even if it’s not perfect, it could
be good enough. So the idea is: music is created with cognitively inducible signals, meaning conscious
control, without moving. The signals themselves, people often ask, “Are you thinking about music?”
They’re not musical in origin; I’m using motor and visual cortex to trigger the signal, so you’re activating
and deactivating motor cortex and visual cortex. We’re not good enough yet—I did research on pa …
you know, on how we process music, cognitive processing of music—we don’t know the signals well
enough; I can’t just look at an EEG of your brain and say, “You’re thinking about E sharp”—you know—
“you’re thinking about B flat,” and there’s … we’re not even close to that. So that’s not the signal we’re
using, but in the future, with better research, we may be able to get to that point, where you’re actually
thinking about music and then making music, but this is really a transform between thinking about
something else and turning it into music.
So my goals were initially just creative; they were just to create a novel musical instrument that would
be interesting and a new performance, and composition, and soloing device, but naturally, since I work
with patients with motor disability—with stroke patients, patients with ALS, spinal cord injury—I think
of: how could I use this to apply to help people with motor disability? So my goals are really in two
directions: one is creative and musical, and the other is neuro-rehabilitative is the way I’d put it. So we
took our device, and we have run some experiments, which we’re on the verge of submitting. And
essentially, what we did was: we wanted to just prove very simply that we have accuracy with this
device; it’s not just … I could attach a cap to your head and have music come out, and I could convince
you that it's working, but it would be very hard for a novice or someone else to say—you know—whether
it's working or not.
So what we do is we take the signal power from either visual or motor cortex—we run two different
paradigms—we convert the signal power from that part of the brain into a musical scale, and then, what
happens is the … once we’ve done this calibration, and we’ve converted into the musical scale—the
calibration takes about five minutes. So we calibrate for each individual for each session—so even me,
I’ll redo it every single time—so that it’s specific to that person’s brain, that … the position of the
electrodes, everything. Then you’re generating notes. Right now, it’s generating notes on a scale by
activating or deactivating the motor and visual cortex, so you go through this training, and then—and
the training's very brief, about five minutes—and then we test the accuracy; what we do is we give
people a target note, and we give 'em ten seconds to hit that note; they actually have to hit it
three times in a row—we didn't want 'em to hit it just once, 'cause if they hit it once, it could be random—but
you have to hit it three times in a row, and then you score a hit; if … ten seconds go by, and you
haven’t scored a hit, it plays a tri-tone, like a bah, and you get another note; and you get to try again;
and you try to score as many hits as you can within a five-minute trial period. So we just see how many
times people can hit, and the goal was simply to see if we could do better than random.
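The test protocol just described (a target note, ten seconds to hit it three times in a row, a failure cue on timeout, repeated over a five-minute trial) can be sketched as a scoring loop. The calibrated EEG-to-note decoder isn't something this sketch can reproduce, so a random chooser stands in for it, which also happens to give the chance baseline the experiment compares against; the note rate and the scale are assumptions, not the study's parameters:

```python
import random

SCALE = ["C", "D", "E", "F", "G", "A", "B", "C'"]  # an example scale

def decode_note() -> str:
    """Stand-in for the calibrated EEG -> scale-degree decoder;
    choosing at random models the chance baseline."""
    return random.choice(SCALE)

def run_trial(target: str, notes_per_second: float = 2.0,
              window_s: float = 10.0, needed: int = 3) -> bool:
    """One target: score a hit if the target note is produced
    `needed` times in a row within the time window."""
    streak = 0
    for _ in range(int(window_s * notes_per_second)):
        streak = streak + 1 if decode_note() == target else 0
        if streak >= needed:
            return True
    return False  # timeout -> the tritone "wrong" cue plays

random.seed(0)
hits = sum(run_trial(random.choice(SCALE)) for _ in range(100))
print(f"{hits} hits out of 100 targets at chance-level decoding")
```

Requiring three consecutive hits is what makes the chance baseline low: a single lucky note is common, but a run of three from an eight-note scale is rare, so scoring well above this baseline demonstrates real control.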
So our initial results—this is from thirteen people; we’re now up to seventeen or so, but it’s similar; it’s
actually a little better now—but random is down there on the right, and the average is—every single
person did better than random; these are novices, mind you, they haven’t been trained on this at all—
the average was much higher than random; percentage-wise, it’s not amazing, but we’re up in the
seventy percent or so. We have the two conditions; we have the visual cortex on the top and the motor
cortex on the bottom; motor cortex is a little more difficult, but a little more interesting, because it
doesn’t require any move … you can do it with no movement at all; the visual cortex, you still need to
open and close your eyes, which is movement in one way, even if we’re not measuring that actual eye
movement. But people did better than chance, and they did significantly better than chance, and
that’s basically all we were trying to prove here. We are doing experiments where we’re going to train
them and see how much better they can get with a good feedback loop and a good training paradigm,
and my theory is that, by using music—and specifically by using music rather than, say, a cursor on a screen that you’re just trying to move left or right—that paradigm of music is powerful enough in human development and in our cognitive space that I think it’s gonna be a more powerful feedback loop, and I think we’re actually gonna be able to do better than has been done in the past—that seventy-five percent with just moving a cursor on a screen—because the feedback loop is music
specifically. I may be wrong, but we’ll see. [laughs] We also did a skew-bias analysis to try to make sure that people weren’t all just generating one note—in which case they would always hit that one note just by chance. Some people were skewed to one side or the other, but overall, the skew bias was pretty minimal; we also did a separate analysis to say, “Okay, if you’re always hitting this one note, what would be the chances of scoring as you did?” And everyone did better; there were one or two that were very skewed, but for the most part, they weren’t.
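The chance-corrected analysis described here—“if you’re always hitting this one note, what would be the chances of scoring as you did?”—amounts to a binomial tail probability. The numbers below are made up for illustration; only the shape of the calculation comes from the talk.

```python
from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of scoring at least k
    hits out of n targets if each target is hit with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative only: if a maximally skewed subject (always emitting one of
# eight notes) would hit each target with p = 1/8, how likely is a score
# of 14 hits out of 20 targets by chance alone?
print(p_at_least(14, 20, 1 / 8))  # tiny: far better than skewed chance
```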
And then, performance-wise, I’ve performed this once at an audio art festival down in San Francisco, called the Megapolis Audio Art Festival, and we had a band: I was playing the encephalophone, and we had three other instruments—drums, bass, and guitar. We gave a brief talk that was similar to this one but a lot simpler, to describe what was going on, and then we played four songs where I improvised. [music]
And I have a crazy getup; art festival, so I had a [indiscernible] wig on top of the …
>> video: A little sound check.
>> Thomas Deuel: So the … at the beginning …
>> video: The … Nick? TSA. Had a checkpoint, yeah. High school [indiscernible]
>> Thomas Deuel: We still have to go through the calibration every time, like I said. [music] The band
decided to just kind of [indiscernible] while I’m doing the calibration.
>> video: On. On.
>> Thomas Deuel: They’re saying, “On, off;” that’s the cued part of the calibration, and then, once it’s
calibrated, we start playing; I’m the vibraphone-type sound. And the way that it’s set up right now,
there’s just continuous notes coming out; there’s no stopping it, so from a performance standpoint, we decided to use a volume pedal—that’s why my left leg’s up there like that—to allow
some phrasing, ‘cause just continual notes, it’s not very interesting musically; you want to add some
rests, some pauses. But I’m the vibraphone; I’m controlling the pitch, yep, yep. Here, it’s restricted to
six notes; it’s a pentatonic scale …
>> video: Welcome to our little show here. So we’re …
>> Thomas Deuel: … within one octave.
>> video: Doctor Gyrus and the Electric Sulci, and I know you’re wondering what the hell is going on.
[laughter]
>> Thomas Deuel: And then I do brief …
>> video: So I’m gonna try to explain it to you now.
>> Thomas Deuel: …discussion that’s similar to this talk.
>> video: So I’m gonna talk to you a little bit about this invention I made …
>> Thomas Deuel: And I’ll pull … there’s a couple other songs. So we’re able to …
>> video: … which is called the encephalophone, and it’s a musical instrument that you …
>> Thomas Deuel: … switch—you know—I switched keys and instruments pretty easily. This is basically
a MIDI instrument, so …
>> video: drive by thought, using EEG, or electroencephalogram.
>> Thomas Deuel: … you could have any voice you want—could be human voice or could be …
>> video: So it’s a novel musical instrument; it generates notes from conscious control without
movement.
>> Thomas Deuel: And since we have six pitches, I can switch the key as well; there are major key and
minor key—what have you.
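Since the instrument is effectively a MIDI instrument restricted to six pitches, the key- and mode-switching just described can be sketched as a simple lookup. The specific tonics, the mode names, and the 0-indexed unit numbering below are assumptions of this sketch, not details of the actual device.

```python
# Six "units" from the decoder map to the five degrees of a pentatonic
# scale plus the octave, all within one octave, as described in the talk.
MAJOR_PENT = [0, 2, 4, 7, 9, 12]   # e.g. C D E G A C'
MINOR_PENT = [0, 3, 5, 7, 10, 12]  # e.g. C Eb F G Bb C'

def to_midi(unit, tonic=60, mode="major"):
    """Turn one decoded unit (0-5) into a MIDI note number; changing
    `tonic` or `mode` re-keys the instrument without retraining the user."""
    scale = MAJOR_PENT if mode == "major" else MINOR_PENT
    return tonic + scale[unit]

print([to_midi(u) for u in range(6)])                          # C major pentatonic
print([to_midi(u, tonic=57, mode="minor") for u in range(6)])  # A minor pentatonic
```

Because the decoder only ever emits a scale-degree index, swapping the lookup table (or the synthesizer voice downstream) changes key, mode, or timbre without touching the EEG side at all.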
>> video: I was moving there, but I’ll explain what that was about. It uses EEG—
electroencephalogram—and it uses intentional control; it’s not just passively generating music.
[indiscernible]
>>: Without moving your right hand, your left hand—that sort of …
>> Thomas Deuel: There, so the … yeah, so there’s the motor cortex, and it’s thinking about movement,
and you have to specifically think about one side.
>> video: So the user basically controls the music with their thoughts. So EEG is …
>> Thomas Deuel: It’s lateralized. So in this … and then there’s the visual [music] … there’s the visual cortex; I use the visual cortex in this one. So this is just a piano sound, obviously. And now, we’re in a major key—we were in a minor key before. There’ll be one more, I think.
>>: Difficult?
>> Thomas Deuel: So when the visual cortex is activated, that alpha frequency goes away—when you open your eyes, it goes away; when you close your eyes, it goes up. So it’s the relative power of that; opening and closing your eyes is the basic idea. The top and the bottom are very easy—meaning the top of the scale and the bottom of the scale track the extremes of that power—so you can hit the top and the bottom quite easily. Going in between is more difficult, but I’m learning more and more how to control the middle, so I can dip into the middle and come back out by partially closing my eyes and opening them again, yeah.
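As a rough sketch of the alpha-power measure described here—the relative power of the roughly 8–12 Hz rhythm over visual cortex, which drops with eyes open and rises with eyes closed—one could compute band power from a short signal buffer with a plain DFT. The sampling rate and the synthetic “eyes open”/“eyes closed” signals below are invented for illustration; the device’s exact signal processing isn’t specified in the talk.

```python
import math

def relative_band_power(samples, fs, f_lo, f_hi):
    """Fraction of total (non-DC) signal power falling in [f_lo, f_hi] Hz,
    computed with a plain DFT over the buffer."""
    n = len(samples)
    total = band = 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        p = re * re + im * im
        total += p
        if f_lo <= k * fs / n <= f_hi:
            band += p
    return band / total if total else 0.0

fs = 256                                   # assumed sampling rate
t = [i / fs for i in range(fs)]            # a one-second occipital buffer
eyes_closed = [math.sin(2 * math.pi * 10 * x) for x in t]      # strong 10 Hz alpha
eyes_open = [0.3 * math.sin(2 * math.pi * 20 * x) for x in t]  # alpha suppressed
print(relative_band_power(eyes_closed, fs, 8, 12))  # near 1.0
print(relative_band_power(eyes_open, fs, 8, 12))    # near 0.0
```

Using *relative* rather than absolute power is what makes the per-session calibration meaningful: the ratio is compared against the range measured for that person, that day, with that electrode placement.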
>>: I think you’re saying [indiscernible] just imagining moving and not moving is a similar event.
>> Thomas Deuel: Right, exactly. And that’s definitely more difficult in the middle …
>>: So that you really have to [indiscernible] yeah.
>> Thomas Deuel: … but it’s like any other musical instrument: when you’re new, you’re a bit sloppy, and you get better with training, and I think—like I said—the motivation of having music playing live is powerful enough that people will learn quite well. So in this one, it’s just a synthesized sound, and it’s in a minor key. In any case, I think you get the picture. [laughs] So I …
whoops, I didn’t mean to come out of the full-screen mode. So then—you know—where am I going
with this? I don’t know if anyone recognizes that film, but …
>>: Brazil?
>> Thomas Deuel: No.
>>: City of Lost Children.
>>: City of Lost …
>>: City … I know it’s city.
>> Thomas Deuel: [laughs] There they are doing a … I think he was taking the souls out of kids, or he was taking their minds—the poor children. We’re not planning that, but the future directions are
musical, so within—you know—within the musical paradigm, there are lots of ways to go with this. One is chords, for example—switching between chords, not just notes—or switching between timbres—so you could have a preset melody, and then you’re switching instruments—or you could trigger whole passages. But diagnostically, it could also be used for listening to EEG. In the
hospital, we have heart monitors, and people are very attuned to changes in sound that let you know that the heart’s going into arrhythmia. EEG’s a little more complicated, ‘cause you have twenty channels going
at once, but you could learn to recognize certain patterns that might be useful so you’re not having to
stare at the EEG continuously if you’re doing continuous EEG. So that’s more diagnostic; and then
therapeutically, motor-impaired patients—patients with spinal cord injury, brainstem stroke, or ALS—they have motor disability, but their motor cortex is still intact for the most part, and that part of their brain is not being used much, or it’s very impaired in its ability to effectuate music … movement—funny that I said that. So I can then go and, one, just hook them up to this device, and have
them be able to express themselves using part of the brain that they don’t get to use much anymore,
because they’re motor-impaired, and two, I’m willing to believe that there’ll be some cognitive rehab that may actually feed back and help them with their motor disability. Now, if a person has an amputation, they’re not getting movement back out of that arm, but patients with brainstem stroke
essentially have an intact arm—they can’t move it—they have an intact motor cortex; they’re just
missing a portion in between—so like the wire’s cut in the middle—and I could use this to bypass.
Obviously, I’m creating music; I’m not creating movement, but we could work a paradigm where we’re
improving the use of that part of the brain—the cortex—in order to start getting them to move more.
So I’m applying for grants for clinical trials to see what this does.
>>: Would you consider it ethical—you know, assuming the person’s fully cognizant of the risks—to go
and do surgery on their more … motor cortex so you get better signals to control their arm? ‘Cause I
would be: yeah, I’d—you know—I could use my arm; I would totally sign up for that.
>> Thomas Deuel: Yes, yeah, absolutely. Yes, it is—I mean, obviously—like … it’s exactly like you said:
they need to be awake and competent and—you know—but a lot of these patients are fully cognizant.
You know the classic would be a locked-in patient; they have a really bad brainstem stroke, and they
can’t move at all; they’re … cognitively, they’re completely there; they have difficulty communicating,
but those types of people—yeah—they would sign up for that. Again, the research isn’t great, so we don’t have a whole lot of data as to how well this would work invasively—meaning intracranially—so we don’t know very well what we’d be able to do, but yes. And there are trials;
it’s just the numbers are so tiny that we don’t have good control yet with that, and … but that is being
actively pursued in general, yeah.
>>: So you’re basically … I mean, your instrument … you’re mapping one thought pattern onto music.
>> Thomas Deuel: Mmhmm.
>>: Obviously, other people are mapping it onto cursor control.
>> Thomas Deuel: Right.
>>: People are mapping onto whatever.
>> Thomas Deuel: Mmhmm.
>>: So one claim is that because—you know, probably evolutionary-wise—sort of the music things, we
probably have more stuff going on around music than we do around moving cursors.
>> Thomas Deuel: Yeah.
>>: We don’t have a whole lot of evolution on that.
>> Thomas Deuel: Exactly, and that’s my theory as to why we’ll do better.
>>: And so that’s … but is there any other … is that really the key difference or is there … then that
raises other questions: what other …
>> Thomas Deuel: What other modalities do you use that are as powerful?
>>: Yeah.
>> Thomas Deuel: Yeah, well, tying it into emotion is helpful, and that limbic valence that music has—
which … meaning it’s very emotional—is a motivator, so if other … other paradigms using non-musical
things that are very emotional might work well.
>>: [indiscernible] some …
>> Thomas Deuel: [indiscernible] people.
>>: In the [indiscernible] examples and the … I mean, that’s pretty amazing timing, you know? The
timing was very good.
>> Thomas Deuel: Yeah, and I was … I’m able to …
>>: Yeah—you know—you’re able to do pretty good with six outputs, I guess.
>> Thomas Deuel: Yeah.
>>: So I’m wondering: where does that … where do you feel like that starts to top off? I mean, can
[indiscernible] six up to ten?
>> Thomas Deuel: Right now, six is about it. It’s a good question; a lot of people ask—you know—how granular can you get? Can you go to a hundred units now? No way. Zero and one—binary—is a piece of cake; I can hit that. I started off with eight, ‘cause it was a major scale with all the notes, and I’ve gotten better with six—just a little more. So yeah, I can’t tell you precisely, but it feels like it’s pretty good around six, and that’s sort of maxing out—six degrees of freedom or …
>>: Could you … I don’t know what all the variables are, but what was it, like twenty electrodes?
>> Thomas Deuel: Yeah.
>>: And if you go up to a hundred electrodes, does that help?
>> Thomas Deuel: Doesn’t really help, ‘cause I’m only using a small part of the … I don’t even need most of the electrodes to do this.
>>: So there’s no real obvious way to go from six to twenty?
>> Thomas Deuel: Cut the skull open; [laughter] get in there. Yeah …
>>: But couldn’t you combine …
>> Thomas Deuel: With training, maybe.
>>: If you combine the visual and motor cortex, so if it was like: eyes are closing here, and then …
>> Thomas Deuel: Yeah.
>>: … and then motor cortex and kind of imagine … couldn’t you just …
>> Thomas Deuel: I’m trying that; it’s a little bit like patting your head and rubbing your stomach; it’s
just …
>>: Oh, really?
>> Thomas Deuel: It’s kind of—yeah—you’re kind of doing two things at once.
>>: Oh, that’s interesting.
>> Thomas Deuel: But yeah, it would be nice to have a non-motor control over phrasing—over the on and off—because as you saw, right now, it’s just continually producing notes—you know—at whatever, two a second, and that’s not very musical, so I use a pedal; but it would be nice to use a second control to pause or not.
>>: What about the thing where they use infrared light going into the brain to look at different activity in the brain? That might get past some of the motor stuff, anyway.
>> Thomas Deuel: It could, and there are other modalities: transcranial magnetic stimulation, or MEG—magnetoencephalogram—which uses the magnetic field to measure, indirectly, the electrical field. It’s got much better spatial resolution, but it’s a gigantic device; there are only a few of them in the world, they’re massive, and they require super-cooling, so it’s a technical limitation. EEG’s practical; it’s portable; I can take the whole thing anywhere.
>>: [indiscernible] isn’t so bad either, though, right?
>> Thomas Deuel: What was that?
>>: Said the infrared lights …
>> Thomas Deuel: Infrared? Not …
>>: … that’s not so hard to do, right?
>> Thomas Deuel: Not too bad; it’s a little more invasive in the … not the infrared itself, but from what I understand, the preparation is a little more involved … but I’m not sure. It could, possibly; another one would be DC direct stimulation—you can use direct stimulation from DC to turn things on and off …
>>: [indiscernible]
>> Thomas Deuel: Yeah.
>>: That’s a different thing, yeah.
>> Thomas Deuel: Yeah. Another one would be fMRI; that’s real-time, good control; spatial resolution’s
good; temporal resolution’s not so great. I don’t know about the infrared; honestly, I’d have to look into
it more. Sorry, [indiscernible]
>>: So everyone’s really curious about, like, how many different input points are there.
>> Thomas Deuel: Right.
>>: Can you … when you were talking about, like, there’s motor, but then there’s also visual; that
suggests you have two bands.
>> Thomas Deuel: Mmhmm.
>>: I’m seeing that you’re doing a mapping from input to pitch …
>> Thomas Deuel: Mmhmm, yeah.
>>: … and you settled on a major and minor scale.
>> Thomas Deuel: Yeah.
>>: So I’m wondering: do you guys have plans to extend that, one, to other pitch systems …
>> Thomas Deuel: Mmhmm.
>>: … that use five or six notes?
>> Thomas Deuel: Yeah.
>>: So this could be portable to other cultures …
>> Thomas Deuel: Yeah.
>>: … that have different musical scale systems. And then the other thing is: are you interested in
doing things that are not just pitch?
>> Thomas Deuel: Mmhmm.
>>: Which also would have the same effect, because pitch and harmony, I think, are biases of, like, the
Western classical tradition, but a lot of other people have focused on other elements of music, like
texture or rhythmic structures and things like that.
>> Thomas Deuel: Yeah. Yeah, so as I’ve mentioned—you know—one would be chords; another would be timbre—just switching instruments or passages—and I’ve definitely thought about other scale systems; I was looking into some Indian—what do you call them—raga-type scales, and then other ones. One thing is that a pentatonic is not bad there—you know—there are some elements that are culturally shared: tonics and fifths are in every culture, and the pentatonic is seen in almost every culture. But yeah, there’s lots of applications that way. This is a first iteration; I want to see this work in a way that I know, and yeah, I’m culturally biased toward the Western musical tradition that I was educated in. And then in terms of the two sorts of inputs, I’m generally not doing two at
once; I’m doing one or the other; I’m not using the visual and motor cortex at the same time in general;
we’re experimenting with it, but I—again—I want to get one modality going really well, prove that it
works well, see how high we can get the percentage accuracy, and then—you know—in parallel, try
these other things.
>>: Sorry, so are your inputs analogue or digital?
>> Thomas Deuel: Analogue to digital—I mean, that box I have … so you’ve got analogue inputs; it’s just current, right? It goes into … the EEG—as you know—has its nineteen outputs coming out—just voltage—that’s analogue. The head box is an amplifier and an A/D converter, so that’s digital.
>>: So I guess I’m …
>> Thomas Deuel: And digital into a …
>>: … took a bad way of asking my question. I’m sorry I’m asking so many consecutive questions. Are
you looking at a specific direction and that’s coded to a specific note, or do you have to be—like on a
violin—you have to be very precise with your intonation and the location so that you get the exact pitch
that you want? As opposed to, say, a fretted instrument, like a guitar, where they’re fixed by the
location of the frets.
>> Thomas Deuel: So I’m just looking at the power of a given rhythm at a certain electrode, sort of buffered over about one second … or five hundred milliseconds.
>>: So the metric works out a fretting structure for … ‘cause it fixes where the sounds are gonna be.
>> Thomas Deuel: Yeah, yeah. They’re [indiscernible] restricted.
>>: Yeah.
>> Thomas Deuel: Right, it’s not a continuous scale; it’s restricted to units; and those units are: I
calibrate your range for … I have you go through your own personal range, and then I set bins, basically;
and I have eight bins, and if you get within this power—you know, from zero to two hundred—you get
one; from two hundred to four hundred, you get two, and that’s it. So you know, you can imagine if
you’re near the edge of one of those bins, it’s not so accurate, right?
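The binning just described—calibrate a personal power range, split it into equal bins, and map each bin to a note—might look like the following sketch. The bins are 0-indexed here, and the 0–1600 range is an assumed example consistent with the “zero to two hundred, two hundred to four hundred” spacing mentioned in the talk.

```python
def make_quantizer(lo, hi, n_bins=8):
    """Split a user's calibrated power range [lo, hi] into n_bins equal
    bins, each mapped to one note (0-indexed in this sketch)."""
    width = (hi - lo) / n_bins
    def quantize(power):
        # clamp, so power outside the calibrated range still lands on
        # the bottom or top note instead of erroring out
        return max(0, min(n_bins - 1, int((power - lo) / width)))
    return quantize

q = make_quantizer(0, 1600)  # eight bins of 200 each
print(q(150), q(350), q(1599), q(2500))
```

Near a bin edge—a power of 199 versus 201, say—the mapped note flips, which matches the speaker’s point that accuracy suffers right at the boundaries.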
>>: ‘Kay.
>> John Boylan: We’re gonna get maybe one or two more questions, and then—you know—maybe a
little bit of time to talk after.
>> Thomas Deuel: Sure, yeah.
>> John Boylan: Do you want to go with your …
>> Thomas Deuel: You.
>>: It was just a comment …
>>: Ah, you guys …
>>: Did you want to go first? Alright.
>>: It was just a comment on the fNIRS, ‘cause we work with fNIRS in our lab, and I was just gonna say: it’s easy to put on, but it’s hard with hair, so prefrontal cortex is much easier to measure.
>> Thomas Deuel: Well, we can get through hair; I mean ….
>>: It’s more comfortable.
>> Thomas Deuel: Yeah, I mean, I’ve used it on people that have lots of hair, and there’s a limit, but you can get through—like, your hair would be easy. I might be collaborating with Reggie Watts, and his hair’s crazy; [laughter] I’ll be like, “Reggie, I don’t know.” [laughs]
>>: Sorry, I meant within the infrared [indiscernible]
>> Thomas Deuel: Oh, oh, right. Yeah, yeah, okay.
>>: So you don’t put that in [indiscernible]
>> Thomas Deuel: I thought you meant hair.
>>: [laughs] No, I didn’t.
>> Thomas Deuel: Shaved head’s great at both.
>>: Yes, [laughs] but fNIRS …
>> Thomas Deuel: Hair’s not … a little bit of hair’s not gonna do well.
>>: Please don’t shave Reggie Watts’ hair. [laughter]
>> Thomas Deuel: What’s that?
>>: Don’t shave Reggie Watts, please.
>> Thomas Deuel: I … up to him. [laughter]
>>: So my question is: you’re using a pentatonic scale, and I’m not a musical expert, but I think
pentatonics are pretty forgiving.
>> Thomas Deuel: Yeah.
>>: If you were to insert, instead of one of the six, you insert something that sounded really horrible …
>> Thomas Deuel: Yep.
>>: … are you able to avoid hitting that note when you’re controlling it?
>> Thomas Deuel: More and more, but—you know—I’m cheating a little with the pentatonic scale; it sounds good all the time, you know?
>>: Sure, [indiscernible]
>> Thomas Deuel: I realize that, and since the accuracy’s not great in the middle—it’s okay; it’s getting better—the pentatonic avoids the fourth, which sounds terrible against the third, for example, and the seventh, which can sound really bad against the tonic. So I did cheat, since the control’s not great enough to avoid a bad note. As we get better, I may flip it back—you know, if we can kind of prove scientifically that the accuracy’s higher, I might try that, ‘cause it would certainly be more interesting—a little more dangerous, which makes it interesting. [laughs] Yeah?
>>: So ins … would you be able to insert some timing control with your eyes? Then, like I think you were
sort of saying, Lucier was doing that a while ago.
>> Thomas Deuel: Yeah, he wasn’t doing timing per se; he was just doing eyes open, eyes closed for
control—the visual cortex—which is pretty much what I was doing there. You could do eye control and
then do motor cortex for the scale, and then use your eyes for control.
>>: I was actually thinking: is it possible to do that? Just … you would still have to use the motor cortex
…
>> Thomas Deuel: Mmhmm.
>>: … for the tone; you couldn’t do it all with the visual cortex and then use your blink per …
>> Thomas Deuel: Yeah, that’s right; that would clash.
>>: No? Okay.
>> Thomas Deuel: Yeah, that would clash. We are really limited in what we’re able to use, as you can see, and there’s a reason; people have this concept of EEG where you just read someone’s thoughts, and then one can just translate that into a robot that does anything—you know—it’s quite … what I tried to do with the talk is describe all the difficulties—not to shoot it down, ‘cause I obviously love EEG and find it really interesting—but it’s a very messy signal, and it’s really difficult to weed out what’s not even brain signal, and it’s also difficult to weed out what we know is functional and what’s not. So …
>>: Are you seeing much cross-talk between the motor cortex and using the volume pedal?
>> Thomas Deuel: No, and I—I mean—I specifically put it on my left foot. If it were my hands, it might be a problem; you get a little bigger signal on the motor cortex with your hands than you do with your feet, and I put it on the left and not on the right for a reason. If I do use it with the motor cortex and the foot pedal, it will affect things more in the motor-cortex paradigm than it will with the visual cortex. With the visual cortex, doing that shouldn’t really be a problem, though you don’t want to move much—if I move even my upper body much, you get a messy signal—so I’ve got to really detach that foot [laughs] to keep it away. But yeah, it’s very sensitive, so …
>> John Boylan: [indiscernible] okay, so …
>>: Got answered, got answered.
>> John Boylan: I think that’s about it; I really want to thank you, gentlemen.
>> Thomas Deuel: Thank you very much. [applause] Thanks for having me. I’ll …