>> Mark Thomas: Good morning everyone, both those of you who are here and those of you who are
joining us online. It's my great pleasure to welcome to Microsoft Research Shima Abadi. Shima
completed her PhD in 2013 at the University of Michigan in the field of blind deconvolution and
now she works jointly, partially as a research scientist at Columbia University and partially at
the University of Washington in the School of Oceanography. She will be presenting today her
talk entitled Blind Deconvolution Using Unconventional Beamforming. Without further ado,
Shima, you have the floor.
>> Shima Abadi: Thank you so much, Mark, for the introduction. Thank you for coming to my
talk. Today I'm going to talk about blind deconvolution using unconventional beamforming. In
my PhD I worked on array signal processing, specifically on blind deconvolution and I developed
a new beamforming technique. I applied these two techniques to underwater communication,
marine mammal localization and biomedical imaging. I also believe that these techniques can
be used in audio processing in devices, for example, Kinect. I will start my talk with an
introduction to acoustic signal processing tasks, blind deconvolution and applications, and then
I'll talk about synthetic time reversal, which is a blind deconvolution technique. Then I'll talk
about the mathematical formulation of synthetic time reversal, plane wave beamforming and
my new beamforming technique, frequency difference beamforming. Then I will show you
some results and discuss conclusions and future work at the end. Acoustic signal processing
has three main tasks. The first one is signal detection. By signal detection I mean finding the
specific signal among a lot of sound sources. A good example of signal detection happened
recently. It was the search for the black box of Flight 370 which was lost in the ocean. There
are so many sound sources underwater. We have ship noise, seismic activity like earthquakes.
We have sound from schools of fish and from marine mammals, but among all these sound
sources we wanted to find a black box. We were looking for a specific frequency and
bandwidth that comes from the black box. The second task is localization and tracking, which
has two subcategories. If we want to know the direction of acoustic energy, we need to
beamform the signal. Beamforming is a powerful signal processing technique for spatial
filtering. We can also use localization techniques to find the location in two or three
dimensions. A good example of localization is echolocation. Bats, for example, send
ultrasound signals, record the reflected signals, and by localization techniques they navigate and
find their prey. In animal biology we can use localization techniques to study the migration
patterns of marine mammals. Or in electronic devices, like Kinect, we use beamforming
techniques to find the ideal angle. The third task is identification. Identification means
recovering the actual source signal. For example, in room acoustics when we say hello, at the
receiver location we might hear a distorted signal. By identification techniques we want to
recover the actual source signal from the distorted signal. Blind deconvolution is an
identification technique. We use blind deconvolution for recovering the source signal.
Identification is sometimes called echo cancellation too. In blind deconvolution we have a
sound source which is unknown. It broadcasts a signal that propagates through an unknown
environment, and the signal is received by a single receiver or an array of receivers. Blind
deconvolution uses the received signal to go back and recover the source signal. Synthetic time
reversal is a blind deconvolution technique. We use synthetic time reversal for recovering the
source signal, but before talking about synthetic time reversal, I want to talk about time
reversal itself. Time reversal is a technique for focusing waves and it's based on the feature of
propagation called reciprocity. Let's say we send a delta function at point A. What we get at
point B is going to be the direct path and all the reflections from the wall and hard surfaces in
the room. If we reverse this signal in time and broadcast it from point B, what we get at point
A is the delta function. This is the main idea of time reversal. Synthetic time reversal uses the
same idea, but we do not need to be able to broadcast from point B. The broadcasting part is
done synthetically. The advantage of the time reversal technique is that we do not need to know any
information about the environment as long as it's not changing. Let me talk about synthetic
time reversal. In synthetic time reversal we need to know the location of receivers and we
need to have the received signals. These are the inputs. And we can recover the source signal
and all of the impulse responses. As I told you, synthetic time reversal is a fully passive
technique. We do not need to broadcast. We just listen. It's very efficient. There are no
searches, optimizations, or iterations. Now let me talk about the mathematical
formulation of synthetic time reversal. Before that, I want to define some notation. I denote the
source signal by s, the transfer function by g, and the received signal by p; j is the receiver index from 1
to n, where n is the number of receivers that we have. As you know, the received signal is a
convolution of transfer function and source signal. In frequency domain we have a simple
multiplication like that. We have the received signal. We want to find the source signal when
the transfer function is unknown, so this is the main problem. If we write the received signal
like this, divided by the summation of the squared magnitudes of all of the
received signals, the magnitude of the source will be canceled from top and bottom. What we have
here is an estimate of transfer function plus an extra phase which is the phase of the source
signal. We can measure this part, so we have the left-hand side. From the right-hand side, if
we can remove the source phase, we will have an estimate of the transfer function. We need a
phase correction. The phase correction is shown here; w is the weighting function and p is the received
signal. We multiply these two together and we sum over the number of receivers. Then we
take the phase of the summation. If we choose the correct weighting function, this phase is going to
be the source phase plus a linear function in frequency. So the critical point is how to choose
the weighting function. In synthetic time reversal, the weighting function is the one used in plane
wave beamforming. You are familiar with this formula: d is the distance
between each two receivers, c is the speed of sound, and theta sub L is the arrival angle. If you
choose this w and put it here, you will get this as the output for alpha, and if we multiply e to the
minus i alpha by the normalized received signal we will have an estimate of the Green's
function plus an extra phase which is linear in frequency. Now if we take the inverse Fourier
transform it will be an estimate of the transfer function with a time shift tb, which is the travel time
along the path that we chose from the beamforming [indiscernible]. Now that we have the
transfer function we can use back propagation or inverse filtering to recover the source signal.
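To collect the steps just described in one place, here is a hedged reconstruction of the synthetic time reversal recipe in LaTeX, using the notation defined above (s/S for the source, g_j/G_j for the transfer functions, p_j/P_j for the received signals, W_j for the weighting function); the exact normalization on the slides may differ from this sketch:

```latex
% Signal model: each received spectrum is the source times a transfer function
P_j(\omega) = G_j(\omega)\,S(\omega), \qquad j = 1,\dots,n.
% Normalizing by the summed received magnitudes cancels the source magnitude
\frac{P_j(\omega)}{\big(\sum_{k=1}^{n}|P_k(\omega)|^2\big)^{1/2}}
  = \frac{G_j(\omega)}{\big(\sum_{k=1}^{n}|G_k(\omega)|^2\big)^{1/2}}\;e^{\,i\phi_S(\omega)}.
% Phase correction: a weighted sum over receivers whose phase is the source
% phase plus a term linear in frequency (t_b = travel time on the steered path)
\alpha(\omega) = \arg\!\Big(\sum_{j=1}^{n} W_j(\omega)\,P_j(\omega)\Big)
  \approx \phi_S(\omega) + \omega\,t_b.
% Removing that phase gives transfer-function estimates (time-shifted by t_b),
% and back propagation / inverse filtering then returns a time-shifted source
\hat{G}_j(\omega) \propto \frac{P_j(\omega)\,e^{-i\alpha(\omega)}}
  {\big(\sum_{k=1}^{n}|P_k(\omega)|^2\big)^{1/2}}, \qquad
\hat{S}(\omega) \propto \sum_{j=1}^{n}\hat{G}_j^{*}(\omega)\,P_j(\omega)
  \approx S(\omega)\,e^{\,i\omega t_b}.
```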
Again, when we take the inverse Fourier transform, the source waveform is going to have a
time shift tb. Now we know how synthetic time reversal works. Let me show you a simple
experimental result. In this experiment I have a source here and I send a
very short signal at 2 kHz. Down here I have an array of receivers, eight microphones. This is a
sample received signal. You can see all the echoes after the direct path. The correlation
between this signal and the actual source signal is only 59 percent. Now I use synthetic time
reversal. I get this signal which has 95 percent correlation with the actual source signal. I was
able to remove all the echoes and get back to the source signal. But in this case beamforming
was working and I was able to use the beamforming algorithm for the phase correction. How
about the cases that beamforming does not work? In the next couple of slides I'm going to talk
about beamforming. Let me briefly talk about plane wave beamforming. You all know about
that. This is a diagram of plane wave beamforming. We have n receivers, so we have n
received signals. We take a Fourier transform of these signals and we get p as a function of
frequency, and we multiply each signal by an appropriate time shift tau, which is exactly like
what we had in the previous slide. After we apply the time shifts, we sum them
up. We take the magnitude squared and that is going to be our beamforming output. Here in plane
wave beamforming, the beamforming output is a linear function of the received signal. We just have p here.
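As a concrete companion to this diagram, here is a minimal sketch in Python of frequency-domain plane wave (delay-and-sum) beamforming for a uniform linear array. The function name, the default sound speed, and the angle grid are my own assumptions for illustration, not details taken from the talk.

```python
import numpy as np

def plane_wave_beamform(p, fs, d, c=343.0, angles_deg=np.arange(-90, 91)):
    """Frequency-domain delay-and-sum beamforming for a uniform linear array.

    p          : (n_receivers, n_samples) array of received time signals
    fs         : sampling rate in Hz
    d          : spacing between adjacent receivers in meters
    c          : speed of sound (343 m/s in air is an assumption)
    angles_deg : candidate steering angles theta in degrees

    Returns B(theta, f) = |sum_j P_j(f) exp(i*omega*tau_j(theta))|^2 and the
    frequency axis.
    """
    n_rx, n_samp = p.shape
    P = np.fft.rfft(p, axis=1)                    # P_j(omega)
    freqs = np.fft.rfftfreq(n_samp, 1.0 / fs)     # frequency axis in Hz
    omega = 2.0 * np.pi * freqs
    j_idx = np.arange(n_rx)[:, None]              # receiver index
    out = np.empty((len(angles_deg), len(freqs)))
    for a, theta in enumerate(np.deg2rad(angles_deg)):
        tau = j_idx * d * np.sin(theta) / c       # per-receiver plane-wave delay
        steered = P * np.exp(1j * omega[None, :] * tau)
        out[a] = np.abs(steered.sum(axis=0)) ** 2  # linear in P, then |.|^2
    return out, freqs
```

Plotting this output versus the steering angle and d over lambda (d times f over c) reproduces the kind of lofargram shown on the slides, with side lobes appearing once d over lambda exceeds one half.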
Now let me show you a simple simulation result when we have a free space, just one source
and n receivers, no boundaries. We have 16 receivers and the frequency that we send is very
broadband from 20 Hz up to 20 kHz, which is almost the same as the human hearing range.
Now we beamform that. It's going to be like this. This is the lofargram, and it usually has
theta, which is the steering angle, on the y-axis from +90 to -90. It's usually plotted versus frequency, but
here I'm showing it versus d over lambda. d is the distance between each two receivers and
lambda is the wavelength. We know from signal processing that side lobes will be present when
d over lambda is greater than half. When we are down here, let's look at the marginal case where
d over lambda is equal to half. You get a nice peak in the middle, so the results are good. We
are getting what we want, but what if we go to a higher d over lambda, like 1.5 kHz? We get a
nice peak in the middle which is just coming from the source, but we get a lot of side lobes.
Now if we go to even higher frequencies, a higher d over lambda ratio, like here where d
over lambda is 40, which means the array is too sparse, the beamforming output is too noisy and
confused. We can't resolve the angle. The problem is here. But why is it not working here?
We assume that all the receivers are receiving plane waves; however, when d over lambda is
very large, which means d is very much greater than lambda, the wavelength, then we are not
receiving plane waves anymore. The receivers are feeling the curvature of the wavefront. That's
why the plane wave beamforming is not working. Now I want to add two reflections to the
simulation. One reflection from a positive angle, 2.6°, and one from a negative angle, -2.4°, and we
have the same receiving array. For resolving these angles we need at least one degree of resolution in
beamforming. Based on this resolution calculation, we need to somehow manufacture
1.5 kHz information to have at least one degree of resolution. Let's look at the
lofargram of plane wave beamforming. It has the same setting as the previous slide. If we look
at d over lambda equal to half, we get a nice fat beam in the middle. It's not able to resolve
the three angles because we don't have enough resolution at this frequency. For higher
resolution we need to go to higher frequency. Let's go to 1.5 kHz, where we expect to be able to
resolve the angles. You can see in the middle we have three peaks at the correct angles, but we also
get side lobes. Now if we go to very, very large d over lambda, again, plane wave
beamforming is too noisy and featureless. What if we only have the information here at very
large d over lambda? How can we resolve the arrival angles from the beamforming output? What
should we do? Here even averaging the beamforming output is not working at all: if you have a
broadband signal and take the average over frequencies, it does not help. Now I want to talk about
the new beamforming technique for resolving this problem. The idea is very simple. Take P, the
received signal: if I have it at omega 2, one frequency in the bandwidth, in the phase
I will have minus i omega 2 times t. Now if I take P at another frequency, omega 1, but this time
complex conjugated, in the phase I will have plus i omega 1 times t. If I multiply these two
together, in the phase I will get the frequency difference times time. Remember that omega 1
and omega 2 are in the bandwidth, so they are very high frequencies, but their difference can be
small. Now, instead of beamforming P, if I beamform this product at omega 2 minus omega 1,
I'm manufacturing the lower frequency information. This new beamforming technique is
called frequency difference beamforming. Now let's look at the diagram of this technique. You
had seen this before for plane wave beamforming. I just want to show you the different part
which is inside the red box. Instead of beamforming P, I'm beamforming the product, P times P
conjugate at two different frequencies. And instead of beamforming at omega, I'm
beamforming at delta omega. Everything else is the same as plane wave beamforming. I sum
them up and take the magnitude squared. That is going to be the beamforming output. But
this time the beamforming output is a quadratic function of the received signal, not linear anymore. Now let's
look at the beamforming output for the three path simulations that I had before. Remember
that we can play with omega 1 and omega 2 to get different delta f values, but the minimum delta f
that I can get is a function of the sampling rate and the size of the FFT. In this case the minimum is
12.2 Hz. This is the lofargram of frequency difference beamforming at 12.2 Hz. We get a fat
beam in the middle. We don't have enough resolution to resolve the three angles. Exactly like
plane wave beamforming, if I want to get better resolution I need to go to a higher frequency,
so let's go to 48.8 Hz. You see that the beam in the middle is getting narrower but it's still not
enough resolution, so let's go higher. The beam in the middle is getting even narrower, but
we are getting side lobes. These crossing lines are related to the side lobes. Remember
that P has three terms, one for each arrival. When we multiply P times P conjugate we get nine
terms, but we only want three terms. The three terms that we want are in the middle. The
other six terms are the cross terms here. It's still not enough resolution. Let's go to a higher
frequency. We can see the separation of the paths in the middle and more side lobes. But let's go
to 1.5 kHz, where we expect to get enough resolution for resolving the three angles. This is the
lofargram at 1.5 kHz. It doesn't seem to be working, but I want to do some magic here. Let's
rotate this figure by 90 degrees. You see the three paths here. We can keep the persistent part
at the correct angles by taking the average over frequency. Now let's look at the output. This is
the average frequency difference and plane wave beamforming output. The dashed line is the
plane wave output, which is featureless. The solid line is the frequency difference beamforming output.
We get three peaks at the correct angles. Now I want to show you the average beamforming output
for different delta f. Delta f increases from bottom to top. I want to show you the similarities
between frequency difference beamforming and plane wave beamforming. When we are at
low frequency we don't have enough resolution to resolve the three angles. But as we go to higher
frequencies we are getting more resolution, and you can see that. It's very similar to plane
wave beamforming. The other thing that I want to show you is the side lobes. For example, at
20 degrees, the calculation says we won't get side lobes below about 1170 Hz.
Look at the 20° beamforming output. Here we don't have any side lobes. We still don't have
any side lobes, but at 1172 Hz we are getting the side lobes. Although the frequency
difference beamforming is a nonlinear technique, there are some similarities with the linear
technique.
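To make the red-box change concrete, here is a minimal Python sketch of frequency difference beamforming in the same style as the plane wave sketch above: the product P(omega 2) times P conjugate(omega 1) is formed for every in-band pair separated by delta_f, steered at the difference frequency, and then averaged over frequency, as just described. The variable names and normalization are my own assumptions.

```python
import numpy as np

def freq_diff_beamform(p, fs, d, delta_f, c=343.0, angles_deg=np.arange(-90, 91)):
    """Sketch of frequency-difference beamforming for a uniform linear array.

    For each in-band frequency f1 (with f2 = f1 + delta_f), form the quadratic
    product P_j(f2) * conj(P_j(f1)) and beamform it at the *difference*
    frequency delta_f, then average the output over f1.
    """
    n_rx, n_samp = p.shape
    P = np.fft.rfft(p, axis=1)
    freqs = np.fft.rfftfreq(n_samp, 1.0 / fs)
    df_bin = freqs[1] - freqs[0]                    # minimum usable delta_f = fs / n_samp
    shift = max(1, int(round(delta_f / df_bin)))    # delta_f expressed in FFT bins
    d_omega = 2.0 * np.pi * shift * df_bin
    prod = P[:, shift:] * np.conj(P[:, :-shift])    # P_j(f1 + delta_f) * conj(P_j(f1))
    j_idx = np.arange(n_rx)[:, None]
    out = np.empty(len(angles_deg))
    for a, theta in enumerate(np.deg2rad(angles_deg)):
        tau = j_idx * d * np.sin(theta) / c
        steered = prod * np.exp(1j * d_omega * tau)          # steer at the difference frequency
        out[a] = np.mean(np.abs(steered.sum(axis=0)) ** 2)   # average over f1 (over frequency)
    return out
```

Averaging the squared output over the f1 pairs is what keeps the persistent peaks at the true arrival angles while the cross terms wash out, mirroring the rotate-and-average step in the talk.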
>>: With a super heterodyne radio receiver, typically you have a very large bandwidth, and
inside there there is some signal that you want to recover, and you expect that the majority of
the radio spectrum does not carry very much information and that in a very small bandwidth there is the
information that you want. It's often not feasible to work at these frequencies directly, so
instead you use an intermediate frequency, whereby you introduce a mixer and that effectively shifts
the spectrum down to some intermediate frequency. It may not be at baseband, but maybe
something else. And that's effectively a frequency difference kind of problem. You're
introducing another kind of modulation that shifts your frequencies of interest. On that you can
then apply beamforming techniques or [indiscernible] techniques that already were in
existence. I'm not sure I understand the difference between doing that and what you are
doing here. They seem to be the same thing.
>> Shima Abadi: I guess in that case you are shifting the linear term of the signal to a lower
frequency. You are not making it nonlinear. You don't have the product, right?
>>: It's quadratic in there. There's a product. You are multiplying by a local oscillator and so
that then becomes quadratic.
>> Shima Abadi: Yes. This is actually a good point and it has been mentioned before, but I
didn't compare that with the frequency difference beamforming yet. I'm not sure if I can
answer correctly right now because I don't have enough information about the technique, but
maybe we can discuss that later.
>>: I'd like to ask a question to make sure I understand. These delta f's, these are the
differences in frequency between the sub bands in each channel that you are multiplying
between?
>> Shima Abadi: Yes. These are basically the omega 2 minus omega 1 that I showed in the
previous slides.
>>: Okay. So for example, for 1500 Hz you are taking something at, say, 20 Hz and you're
multiplying that by something at 1520 Hz?
>> Shima Abadi: Exactly.
>>: So actually, maybe, it's interesting that you're modulating one sub-band by
another sub-band. That could maybe make a difference between using an LO (local oscillator),
which is a set frequency, and -- you're doing something a little bit different here because
you're using other sub-bands as LOs for other sub-bands. Maybe that can be something
else.
>>: I don't know these very well, but in a synchronous receiver you use the signal's own carrier to
demodulate itself, and that is very similar. There are some similarities there.
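For what it's worth, the distinction being discussed in this exchange can be written down compactly; this is my own framing of the comparison, not something from the slides:

```latex
% Heterodyning: mixing with a fixed local oscillator shifts a component at
% \omega_c linearly to \omega_c \pm \omega_{LO}
\cos(\omega_c t)\,\cos(\omega_{LO} t)
  = \tfrac{1}{2}\cos\big((\omega_c-\omega_{LO})t\big)
  + \tfrac{1}{2}\cos\big((\omega_c+\omega_{LO})t\big).
% Frequency difference beamforming instead multiplies two in-band components
% of the received signal itself; for a single arrival with delay t_j,
% P_j(\omega) \propto e^{-i\omega t_j}, so the product carries the delay
% information at the difference frequency:
P_j(\omega_2)\,P_j^{*}(\omega_1) \propto e^{-i(\omega_2-\omega_1)\,t_j}.
```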
>> Shima Abadi: Now that we have a beamforming technique which works, I want to apply
beamforming output to synthetic time reversal to recover the source signal. This time, you
remember, we had this weighting function for the original synthetic time reversal, which comes
from the plane wave beamforming. For this case I need to change the weighting function to use the
nonlinear weighting function, which depends on the received signal itself, and if I make this change and
apply synthetic time reversal I can recover the source signal. This is the original source signal
that I sent. It's a 60 millisecond chirp, and this is one of the received signals. It had only 57
percent correlation with this one. You can see the distortion. Now if I use this weighting function and
apply synthetic time reversal, I can recover the source signal. Now I have 98 percent
correlation with the actual source signal. Now we know that it's working in simulation. Let's
look at the experimental data. The experimental data that I used was for underwater
communications signal. I had the exact same receiver setup. I had 16 channels with the same
element spacing, and everything else was the same. If I do the plane wave beamforming I get this
dashed line here, exactly like the simulation. It's not helping at all, but if I use frequency difference
beamforming I get these two peaks. I should say I am only beamforming the signal after the red line
here, because we don't have enough information about the environment to know the exact
arrival angles and make sure that beamforming is working. But you see this noise
pulse here? That's because of cable slapping, and we have the time difference of arrival across the
different channels, so we can calculate the arrival angle, which we know is at 25 degrees. If I
beamform that and I get a peak at 25 degrees for this pulse, I can say that frequency difference
beamforming is working on our experimental data. If I use synthetic time reversal for
experimental data, this was the original sound signal that was sent. This is the received signal,
one of the received signals, which has only 48 percent correlation. Now I apply synthetic time reversal
to these signals and I get 92 percent correlation, so it's a big improvement for the experimental
data. To conclude my talk: I showed you that blind deconvolution with synthetic time
reversal is possible when beamforming works. And I showed that frequency difference
beamforming can be used with a sparse receiving array to resolve the arrival angles when plane
wave beamforming fails to do that. Now I just want to talk a little bit about future work.
Frequency difference beamforming is a new technique; at least in acoustics it's very new. I just
applied frequency difference beamforming to a uniform linear array, but one research topic could
be applying these techniques to non-uniform linear arrays, like what we have in Kinect. Maybe
we can improve the beamforming output for that by using this technique. Or applying it to
nonlinear arrays, like a spherical array. Also, I would like to know the number of microphones
that we need for frequency difference beamforming to work. What if we have just a few
microphones, a couple of microphones? The other research topic is: are we able to remove side
lobes at not very large d over lambda, here where we are resolving the angles but we get side
lobes? Oops. I don't know what happened. The presentation just went out. Anyway, for not
very large -- now it's logging off. I didn't have anything else there. I just wanted to say that for not very
large d over lambda, we could look at how we can use frequency difference beamforming to remove the side
lobes and whether it works better or worse compared to MVDR or other nonlinear beamforming
techniques. And I just wanted to thank my collaborators in this research, Professor
Dowling from the University of Michigan and Doctor Song from the University of
California San Diego and thank you all for your attention. I welcome any questions. [applause].
>>: I was wondering about that synthetic time reversal. Could you remind me how that exactly
depends on the beamformer weights?
>> Shima Abadi: We need the weights to be able to make the correction phase, the extra term, linear in
frequency, because if it's not linear then we get phase distortion. If
we choose an inappropriate weighting function then we get phase distortion. That's why it
depends on beamforming.
>>: Another question I had on that is what happens when that magnitude, the average
magnitude that you compute has zeros or has very small values? Doesn't the transfer function have to
then blow up when you have those zeros in the denominator?
>> Shima Abadi: We take the magnitude and then we square it, right? That case could happen
maybe for just one receiver. I don't know when that would happen, because we
take the magnitude and then we square it, so we are adding positive values.
>>: You take the magnitude and then you square, and then you take the average over, do you
average over all channels?
>> Shima Abadi: Over all channels, but we take the magnitude first. So it's not like having a plus
and a minus that cancel.
>>: But that magnitude could be zero eventually, right?
>> Shima Abadi: Then each received signal has a small value too, so the top and bottom are
both small. You know what I mean?
>>: Oh, I see, okay.
>> Shima Abadi: You know what I mean? The ratio is going to be…
>>: I see.
>>: Going on from that, the problem that usually comes up with room acoustics when you have
a single source and a single receiver is that if you were to take that channel and plot its
zeros in the z-plane, in many cases those zeros lie outside the unit circle, and so if you wanted to
completely remove those zeros by placing poles in their place, you end up with a filter that is
unstable. There is no solution in that case with a single channel. If you have multiple
observations and those zeros don't lie in the same position, then you no longer have that
problem.
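A minimal single-channel example of the point being made here (my own illustration, not from the talk):

```latex
% A channel with a zero outside the unit circle, e.g. at z = 2:
G(z) = 1 - 2z^{-1}.
% Its exact causal inverse has to place a pole there, and the resulting
% impulse response grows without bound, i.e. the equalizer is unstable:
\frac{1}{G(z)} = \sum_{k=0}^{\infty} 2^{k} z^{-k}.
% With two channels G_1, G_2 that share no common zeros, stable FIR filters
% H_1, H_2 exist with G_1(z)H_1(z) + G_2(z)H_2(z) = 1, which is why the
% multichannel case avoids the problem.
```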
>> Shima Abadi: You are absolutely right. I studied the performance of synthetic time reversal
versus the number of receivers that I need in simulation, and it shows that I need at least four or five
hydrophones depending on the situation, but I need more than one receiver to be able to recover
the source.
>>: In room acoustics the question of how common two close zeros are is not [indiscernible]. If you
have two zeros quite close to one another, then what results is that the filters that you
design have a very large white noise gain at those frequencies. So if you had a microphone
that had no self-noise then you would be able to recover it perfectly, but there are issues
when the microphone does have its own noise: you start introducing very large amounts of
gain at those frequencies. So that's a fundamental limitation that we usually come across.
>>: I missed at the beginning of your talk how easy [indiscernible] if you have multiple
sources. So far what I heard and saw is one source and some kind of noise and reflections,
and in general what we call a reverberant environment. What if I have two separate sound
sources?
>> Shima Abadi: Two sources that are sending the same signal, or different signals?
>>: Uncorrelated different signals.
>> Shima Abadi: You are talking about synthetic time reversal, right? That technique? Then we
would need to remove two different source phases. We would probably need to separate the sources first
and, I mean, recover each source signal separately. I haven't worked on
multiple sources yet and I don't know if we can apply synthetic time reversal to multiple
sound sources, but maybe by finding a correct weighting function which has the sum of both source
phases plus an extra phase that is linear in frequency. I should think about that.
>>: The other question is how robust is this approach to noise?
>> Shima Abadi: I studied that for the lab experiment that I showed you. I increased the level
of noise. For the experiment that I showed you we had 14 dB SNR and I was
able to go lower, like nine or eight, but after that, as I studied, we need to have
more receivers to be able to recover the source signal with at least 90 percent correlation.
The performance decreases as the signal-to-noise ratio gets lower.
>>: For underwater acoustics maybe correlation is a very good thing to measure, but in
acoustics in real environments, typically we use some different measures. Have you ever
tried signal correlation on a speech signal and tried to use [indiscernible] and so forth?
>> Shima Abadi: Instead of correlation, what measures do you use?
>>: Okay. This is for human speech. In general, there is a subjective [indiscernible] perception
of how good the quality is and how understandable it is; there are objective and subjective
measurements. Have you ever tried these?
>> Shima Abadi: I don't see any limitation for that, because it's not just for underwater. The
experiment I showed you was an airborne experiment, so it would be a good research study. We should
try that. I don't see any limitation.
>>: I don't see a limitation either, but I just asked if you have tried it?
>> Shima Abadi: No. I haven't.
>>: In the experiments that you did you had these narrowband chirps, 14 to 17 kHz you were
saying?
>> Shima Abadi: For the experiment we tried?
>>: Yes, for the underwater experiments. And your quality measure is correlation. What
ultimately are you doing with the signals after you have done your deconvolution?
>> Shima Abadi: With the signal coming from…
>>: Yes. Once that comes back what is your use, what would you do with it?
>> Shima Abadi: As we discussed earlier, in underwater communication for submarines, for
example, they send a signal and the other submarine may receive a signal that is distorted,
and they want to know what the original message was. So they are particularly looking at the
signal coming from synthetic time reversal: what was the original message? You
know, underwater is like a sound channel. We get reflections from the surface and bottom, and
sound can travel very far underwater. So after a couple of
kilometers underwater the signal that you receive is distorted. We want to know what
the signal was originally.
>>: So that chirp is a synthetic message?
>> Shima Abadi: Exactly.
>>: So you are just using that in place of some modulated signal that is used to convey some
information?
>> Shima Abadi: Yes. For the second one, the underwater experiment, they sent an m-sequence
signal. It wasn't a chirp, it was an m-sequence signal, but for the first one, the simulation, I sent a
chirp signal.
>>: Okay. So the m-sequence you ultimately used to estimate an impulse response?
>> Shima Abadi: Yes.
>>: So you are using blind deconvolution to clean up the received signal, to which you then
apply another, supervised, system identification, in the sense that with the m-sequence
you know what the original was and what was received? And that then gives you another
estimate for the channel.
>> Shima Abadi: Yeah. The beauty of synthetic time reversal is that we can get the original
signal and all the impulse responses at the same time, so by recovering the impulse responses
we also have information about the environment. This is correct as long as the transfer function is
not changing; it's constant. If you have a long source signal and there are waves, surface
waves underwater, then your [indiscernible] is changing constantly. But if
you have a calm sea then we can use this technique even for a long signal.
>>: I'm not sure I understand the application. So you are sending a chirp or you are sending an
m-sequence and then you are doing your very best to deconvolve it to the point where you have a
high correlation between what you sent and what you received?
>> Shima Abadi: Uh-huh.
>>: Say there's no information; I mean, you have already removed all of the reflections and
reverberations. Why would you send an m-sequence if the very point of what you are doing is
to remove reflections?
>> Shima Abadi: In the experiment we already know the original signal, but in the real world we
don't know the original signal. The purpose of synthetic time reversal is to find the original
signal. You know what I mean? In the experiment we know that it's an m-sequence with those
specific characteristics, but in the real world we just have receivers. We receive signals and we want to
know what the original signal was.
>>: Okay. If you use the m-sequence, then the residual, the things that you weren't able to
remove, are the things that count. Is that it?
>> Shima Abadi: Yes.
>>: You would use that as an evaluation criterion?
>> Shima Abadi: Yes, exactly. For this technique I need to have something to be able to say
that it's working, so for the experiment I knew the original signal. But the application of this
research is when we do not know the original signal and we want to recover it.
>>: Have you done any experiments with non-stationary signals, or natural signals like
[indiscernible] locations?
>> Shima Abadi: I applied this technique to marine mammals and they are moving. You are
right. But the movement was not that big, so we can
assume it's stationary. But if it's moving very fast, then we can't assume that.
>>: And were you able to recover these pretty well?
>> Shima Abadi: Yes. Actually, an extension of that was whale localization. I used synthetic
time reversal to find the location of marine mammals underwater. That was a chapter of my PhD.
>>: If the purpose is localization, then beamforming need not necessarily come into it.
I mean if you have a pair of receivers and say you are able to work up a correlation between
them then you can estimate the time delay of arrival.
>> Shima Abadi: We can estimate the time.
>>: Yeah. And then if you had introduced the third receiver then you can estimate again the
time difference of arrival. And then you can create, for every pair you can create a locus of
potential source locations. And that then isn't subject to [indiscernible] problems. It doesn't
matter how far apart you have these receivers. If your plan is only to localize the sources as
opposed to extracting the source, then…
>> Shima Abadi: You are right, but these are two different problems. We want to find a
location or we want to find the source signal. When finding a source signal we need
beamforming, but for finding the location you are right. We are never in that range of d over
lambda because whale frequencies are always lower, so we don't have that problem. The
problem that I talked about today was for the communication signal, which is at high frequency. So
these are two different…
>>: Did you use blind deconvolution for the source localization case?
>> Shima Abadi: Exactly. It was an extension of that, so we need some other information too
for localization. For recovering the source signal we didn't use any information about the
environment, but for localization we need some information. So it was another project, an
extension of synthetic time reversal for localization, but for recovering the signal we just need
beamforming to work.
>>: When you do this frequency difference, obviously, there is a finite frequency [indiscernible]
bandwidth you can use, so do you, like, wrap around when you hit [indiscernible]
>> Shima Abadi: No. Actually, that's the reason we need a broadband signal, because when we
are close to the end of the bandwidth we have zero after that. For a
delta f at the end of the bandwidth we will have zero, so that's what we are losing in
localization for that part. But since we have a broadband signal we are okay.
>>: So is it the case that as delta f increases you have fewer and fewer cross
multiplications between frequencies?
>> Shima Abadi: We have less or more?
>>: Fewer. So if, let's say, your bandwidth is 2000 and your delta f is 1000, you
can pair 0 with 1000 and then 1000 with 2000, but if your delta f is 2000 then you can
only pair 0 with 2000, so you have fewer cross multiplications.
>> Shima Abadi: It's not working in that case; we are not able to resolve the
angles. We need a bandwidth to take the average.
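A rough count of the trade-off being discussed, assuming an FFT with bin spacing f_s/N_FFT and a usable band of width B (my own back-of-the-envelope estimate, not a figure from the talk):

```latex
% Number of in-band pairs (f_1, f_1 + \Delta f) available for averaging:
N_{\mathrm{pairs}}(\Delta f) \approx \frac{B - \Delta f}{f_s / N_{\mathrm{FFT}}},
\qquad 0 < \Delta f \le B,
% so the amount of averaging shrinks linearly with \Delta f and vanishes
% when \Delta f reaches the full bandwidth.
```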
>>: So you need sufficient bandwidth so that as you increase delta F you just need a higher and
higher bandwidth?
>> Shima Abadi: Yes.
>> Shima Abadi: Any other questions?
>>: If you just had a source signal that was just a single spot frequency then as soon as you go
into that region of heavy [indiscernible] there is nothing you can do? You have to have some
bandwidth or it won't work?
>> Shima Abadi: Yes.
>>: The fact that you are able to perform some degree of averaging is what makes it better?
>> Shima Abadi: Yes. Because the main part of that was taking average over frequencies. If
you have just one single frequency then you have a problem.
>>: Okay.
>> Shima Abadi: Thank you.
>> Mark Thomas: Thank you, Shima.
[applause]