>> Ivan Tashev: The topic is white hot in the [inaudible] industry: can we have peace between Wi-Fi, white spaces, and wireless microphones? And without further ado I'll introduce the two speakers for tonight. This is Ranveer Chandra. He's a senior researcher at Microsoft Research, here in the mobile computing research area. Working pretty much in this building, he has published multiple papers and has multiple patents, to be precise more than 40 patents, 12 of which are granted, and he has multiple awards. One of the most prestigious is being named one of MIT Technology Review's top 35 innovators under 35. The other speaker is George Nychis. He is a PhD student at Carnegie Mellon University and was an intern here at Microsoft Research. Just to let you know, [inaudible] Microsoft Research is [inaudible] and he has [inaudible] twice. [inaudible] So they will present their work without further ado, and they have the floor.

>> Ranveer Chandra: Thanks for coming over. As Ivan said, I'm Ranveer Chandra. We have been working on the white space project for a very long time; since 2005 is when we have been working on it. And this work is jointly done with George and Thomas Moscibroda. He was my collaborator here; then he moved to MSR in China. We still collaborate a lot. And Ivan -- Ivan taught us all of the audio parts of what I'll be talking about today. I'm mainly a wireless guy. Audio is something that I'm learning, because white spaces did impact the audio industry significantly, and we had to consider this problem in order for the rules to be adopted. By the way, this is a small crowd, so feel free to just raise your hand at any time and ask questions. Most of the talk is wireless. I can understand if you don't have the background, but don't think it's a stupid question; I'll try my best to explain it.

So first I'll begin by motivating the need for white spaces. I know from your perspective it's all evil, it's going to destroy the wireless mic industry, the audio industry. From our perspective, from the wireless side, this is big. White spaces are something we have been struggling for for a long time. And why is it important? The reason is there is this big spectrum crunch that the FCC in its broadband plan called the impending spectrum crisis. That is, the amount of good spectrum that can be used is limited. Qualcomm, for example, if you ask them, there's a report of theirs where they say the best part of the spectrum that you can use for wireless communication is from 300 megahertz to 3 gigahertz. That is the amount of spectrum that is good for mobile communication, for devices that are mobile. So the amount of good spectrum is just that much. And within that, over the last few years, there has been a huge proliferation of mobile devices, phones, tablets, whatnot. Not only that, the amount of media demand, the demand for video on these devices, is growing significantly. These are just some of the forecasts, and this is iSuppli. And most of the people are watching YouTube and Netflix. And as you can see, this has been in the news all over. This is Cisco saying that, well, the amount of data traffic will double every year. De la Vega, he's the CEO of AT&T, he says, well, something needs to happen soon, we are running out of spectrum. And this is something which is all over.
Even now you'll keep hearing this more and more, reading about this in the news, that the amount of usable spectrum is limited and we need to do something more. So CTIA, which is the organization of the carriers, approached the FCC and said, you know what, give us 800 megahertz of spectrum; that's what we really need by 2015 in order to meet user demand. The FCC said 800 megahertz between 300 megahertz and 3 gigahertz is a lot of spectrum to ask for. So the FCC pushed back and said, we'll try to get you 500 megahertz by that time period. Right. The interesting thing is if you look at the next chart, this is the spectrum allocation chart for the U.S., from 3 kilohertz to 300 gigahertz. Nearly all of it is allocated. So where does the FCC get 500 megahertz from, especially between 300 megahertz and 3 gigahertz? Where does it get that spectrum from?

So that is where we started investigating a new technique. This is back in 2005, when there were no rules around it. We said that instead of statically assigning spectrum for purposes -- so for example assigning a certain part for TV, a certain part for cellular -- can we come up with a new spectrum use paradigm? That is, we'll figure out which part of the spectrum is not in use and use that for unlicensed communication. And the first attempt that we made was at the TV spectrum. This is the broadcast TV spectrum, where wireless mics operate as well. So in this chart -- this is log scale, by the way; each row is log scale -- these are the portions where broadcast TV exists, and this is where Wi-Fi operates right now. So don't take the absolute measure, but still there is more than three times that amount of spectrum used by broadcast TV. So the first question we asked was, can we use this TV spectrum that is not in use by TVs at that time for wireless communication? For unlicensed communication? And the reason we went after that is there is this analog to digital TV transition that has happened in the U.S. and is happening worldwide. In the U.S. it happened in summer of 2009, Japan in 2011, and so on. Every country is moving from analog TV to digital TV, just because it is so much more efficient to send data over digital TV. So the FCC in 2008 announced its historic ruling which said you can use the unused TV channels, or white spaces, for unlicensed communication. And this was, as you can expect, in the news all over. Every newspaper you could read talked about this as the next big thing, because now, all of a sudden, you had so much more spectrum to connect to the Internet. I know I'm not talking that much about wireless mics yet, but we'll come to that in a bit.

So before I get into that, what are the white spaces? Just defining it more clearly. Suppose this is -- the X axis here is frequency. That's where Wi-Fi operates: 2.4 gigahertz and 5 gigahertz, 802.11g and, in the 5 gigahertz spectrum, 802.11a. This axis starts from DC. And this is where TV exists: VHF up to 216 megahertz, and 470 to 698, essentially, is where UHF broadcasts exist. And this spectrum is where right now all the wireless mics in the U.S. operate. The interesting thing is, even though this frequency is allocated for TVs and wireless mics, not all of it is in use at every place. So what we did was we took a spectrum analyzer. We took it to one of the houses. This was an intern who was here back in 2006.
She took the spectrum analyzer to her place. She measured the spectrum occupancy across time, and this is what we found. This is the UHF part of the spectrum. And a lot of the spectrum, even though it's allocated for TV broadcasts, is not in use. This varies at different places. In Redmond it is certain channels; if you go to Seattle it's a certain other set of channels that are not in use. And these are what are defined as white spaces. That is, allocated but not used TV spectrum is what we call white spaces. To give you some more background about white spaces, there are 50 TV channels. Each channel is 6 megahertz wide in the U.S.; in most other places, in Europe and Asia, it's 8 megahertz wide. But most of the other principles still hold.

So then the next question -- I just defined what white spaces are. The next question is, why should we even care? Why is this so cool for the wireless industry? The most important thing is, well, this part of the spectrum offers much, much more spectrum to send data bits on. There is a spectrum crunch going on. Given that crunch, getting any more spectrum is very, very useful. And over here you could get up to three times that spectrum. That is, up to close to 300 megahertz of spectrum can now be made available, depending on where you are, for data communication. So just having much more spectrum is so cool. But the bigger advantage is that in this part of the spectrum, because it's at lower frequencies, you get much more range. Signals propagate much, much more. Theoretically, it's about four times the range of Wi-Fi. We've done measurements where it's even more; I'll get to those measurements in a bit. So combine these two properties, and all of a sudden you can think of so many new applications which until now were not possible. For example, rural broadband, city wide meshes. Google deployed a city wide mesh in Mountain View. It wasn't a big success. The big reason is you put up a router on the lamp post and you don't get a signal inside the house, because the windows have a huge amount of shielding. The 2.4 gigahertz signal just won't propagate as much. In the lower frequencies your signals just get into the house much more easily. So that's why you can see why the wireless industry is so excited. Carriers are excited because they think now they can offload a lot of the data from the phones onto the white spaces. Then you can think of many more scenarios, like people can deploy city wide networks to support an alternate voice network, something Skype-related. Right? So there are many applications. One just has to start letting their mind run and they can come up with many new applications in this part of the spectrum.

So with this in mind -- so we know the benefits of this part of the spectrum -- we went out, and the first question we asked was, okay, what does it take to build a network over the white spaces? Suppose we set up a tower. So we had, for example, those two devices sitting right there; they are both white space devices. These are software defined radio based; we'll get to the hardware design in a bit. But then the question was, once you have these devices, how do you really form a white space network? What are the challenges? So that's what we set out to answer. We set out asking this question long back. And the first thing that arose was, why not just reuse Wi-Fi based solutions? Why won't Wi-Fi work as is over the white spaces? So there are three key differences.
I won't dig too deep into it, because I know you are not wireless people. But if you're curious you can ask me any questions here. So the first difference between the Wi-Fi spectrum and the white spaces is that this spectrum is fragmented. In Wi-Fi -- I don't know how much you know about Wi-Fi, but you have channels 1 through 11, right, in 2.4 gigahertz. A lot of you must have configured your router to operate on one of them. You can operate on any one of those channels, and that's a contiguous part of the spectrum. In white spaces, portions of that spectrum might be occupied by TVs or by wireless mics. You cannot use that spectrum. So the spectrum in turn is fragmented. You don't have one big chunk; you have parts of it -- so for example there's one chunk which is four channels wide, one which is two, and one which is one channel wide. Each channel, as I said before, is 6 megahertz wide. So this is what we call the first difference, which is fragmentation. Right? And we measured this fragmentation in different parts of the U.S.: blue is urban, red is suburban, green is rural areas. As you can see, in urban areas there are lots of TV channels, so there are lots of small fragments of channels that are available, either one channel wide or two channels wide mostly. In rural areas, where there are very few TV channels, you have these chunks which are more than 6 channels wide which are not in use, so available for white space communication. So because each channel is 6 megahertz wide, we might need to use multiple channels for communication, which is different from what Wi-Fi does.

The other key difference between white spaces and Wi-Fi is that there is spatial variation in spectrum availability. In Wi-Fi, if you have your laptop or your phone, it can talk to your access point on any channel. Certain channels might not be as good as others, but you can operate on any channel. If I configure my router on channel 6, my client can definitely find it. In the white spaces, though, certain channels are blocked. The laptop, for example, cannot operate on channel 4, because it's in range of the TV tower. So if I set up my base station to operate on channel 4, I will never be able to connect to that laptop. This is another problem that needs to be addressed.

And the third problem is that of temporal variation. This is where wireless mics come in. Suppose you have a base station and a client. Both of them are operating on channels 2 and 3. At a certain point in time, someone switches on a mic. Suppose I'm giving a talk here and I switch on a mic operating on channel 2. So what this means is that at this point this channel becomes unusable; the system needs to reconfigure and operate on another part of the spectrum. This can arise because a mic comes up, or because a client is mobile. I might be mobile and enter a range where a certain channel cannot be used. So that's where this problem can come up.

So to solve this problem, we have been looking at the use of cognitive radios. Cognitive is a very overloaded term. Oh, Victor is at the back -- I wanted to introduce Victor. Victor is the manager of the group. Victor has been involved with this project right from the beginning. Yeah. No, thanks for coming. Yeah.
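To make these three differences concrete, here is a minimal sketch, using entirely hypothetical channel data, of how a white space device might represent fragmented, location- and time-dependent channel availability and compute the channels usable by both ends of a link. It only illustrates the idea behind the cognitive-radio approach described next; it is not the project's actual code.

```python
# Illustrative sketch only (hypothetical channel numbers and blocked sets).

from typing import Set, List, Tuple

def available_channels(blocked: Set[int], num_channels: int = 50) -> Set[int]:
    """Channels not blocked by a TV broadcast or a wireless mic at this
    device's location (spatial variation) and at this time (temporal variation)."""
    return set(range(1, num_channels + 1)) - blocked

def contiguous_fragments(channels: Set[int]) -> List[Tuple[int, int]]:
    """Group available channels into contiguous fragments (fragmentation).
    Returns (start_channel, width_in_channels) pairs; each channel is 6 MHz in the U.S."""
    fragments, run = [], []
    for ch in sorted(channels):
        if run and ch != run[-1] + 1:
            fragments.append((run[0], len(run)))
            run = []
        run.append(ch)
    if run:
        fragments.append((run[0], len(run)))
    return fragments

# A base station and a client may see different blocked channels (spatial
# variation), so only channels available at *both* ends can carry the link.
base_station = available_channels(blocked={4, 5, 9, 22, 23})
client       = available_channels(blocked={4, 9, 10, 22, 30})
common       = base_station & client
print(contiguous_fragments(common))   # the widest fragment is the widest usable chunk
```

The same intersection has to be recomputed whenever a mic comes up or a device moves, which is the temporal-variation problem the talk turns to next.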
So to solve this -- so in order to form this network where the spectrum has these three fundamental properties which are different, we came up with this idea of using cognitive, or smart, radios. So the idea is, as opposed to existing radios, here your laptop and your phone will determine which parts of the spectrum are available at each device. So my laptop, my phone, everyone will scan and figure out what parts of the spectrum are available, determine which part of that is commonly available at both the communicating nodes, both the devices that want to communicate, and use that spectrum for communication. So at Microsoft Research we have been looking at the networking challenge -- not really building the devices so much, but once you've built the device, how do you communicate? How should nodes connect? How do they discover one another? Which spectrum band should they operate in, what's the center frequency, what's the channel width, how long should you occupy a channel? Once you start using a channel, how long can you be in that channel? Which protocol should we use, and then prove the optimality of that -- we showed that you cannot do any better than that. And this project, as you can imagine, since we've been working on it since 2005, has gone through several versions where we solved different sets of problems and published different papers. Those are the ones in black. And they're still going on. And we've had significant impact on policy and so on.

So in version 1, what we looked at was forming a mesh network. If you were to form a mesh network, how would you go about doing it? Most of this work was in simulations. In the second version, we looked at forming an infrastructure based network: you place one node, one access point -- we built a six device network on one floor of the building -- you place the node, have five nodes connect to it, and we looked at what the challenges were. How would you build the most efficient network? This was a small prototype. In version 3 we took our lessons from here and decided to go out in the wild and build the world's first white space network. So on our campus, which is one square mile, we deployed multiple base stations, and I'll talk about that scenario. And version 4 is what most of today's talk will be on: stretching this to the limit. How can we be more optimal? How can we co-exist with wireless microphones? So as I said, I'll just talk about these two, the first one very briefly, and then get into version 4.

So version 3 is the first white space network that we built. These are just some of the details. In fact, if you had come during the day, I could have shown you the demo. The reason it has to be during the day is that this is running on our shuttles, the campus shuttles that run within Microsoft. So this is the Microsoft campus, most of it, and this is one mile by one mile. There are more than 70 buildings, and there are these campus shuttles that take people from one building to another. This is for meetings. And there are more than 200 shuttles that operate within Microsoft. And right now there is no way to connect to the corporate network from these shuttles. In a research project in our group we did try to use Wi-Fi to connect: we just deployed multiple Wi-Fi access points on different buildings; on every building we placed four different [inaudible]. But even then we found that we had a huge number of these dead spots where we couldn't provide coverage.
Over the white spaces we tried to see if we could leverage white spaces to solve that problem. So for that we were among the first to obtain an FCC experimental license to send and receive over this spectrum. And since I cannot show you the real demo right now, these are some of the pictures. That's one of the antennas we've put on one of the buildings. Two more. And the antennas on the buildings are essentially connected to an Ethernet backhaul. At the back of the shuttle -- this is one of the shuttles -- we have an antenna. This is a huge antenna. The reason it's huge is that this is for experimental purposes; you could ideally build it in some smaller form. This antenna talks to the tower. And we have another cable that goes from behind the shuttle, attached to the hitch, up next to the driver. This is the white space device. This is connected to a Windows 7 laptop which serves as an access point inside the shuttle. So people inside the shuttle can connect via Wi-Fi to the Windows 7 laptop, the packets are then sent over the white spaces, and this is how people get connected.

So there are many challenges that we had to address in order to get there. One of them is designing the hardware itself. We used our hardware, and we've also used other people's hardware, like those inside the shuttles. So this is just one sample hardware design. Using off-the-shelf components, we built a device that could send and receive packets over the white spaces. Just to give you some background -- this is just one slide; we have an entire paper on this -- this is how it works. We have a Windows PC which has a Wi-Fi chipset. We modified the driver to change its channel width to 5, 10, 20, or 40 megahertz. This is to use multiple TV channels if they are available. Then what we did is we attached a software defined radio which takes the raw time samples and pushes them here. We do the [inaudible] here. We detect whether there's a TV or a wireless mic on that channel. It goes to a connection manager, which then configures the white space radio -- essentially a front end attached to the Wi-Fi card -- to operate on a certain TV channel. And this is what it looks like, one of the devices. That's another device. There are a few more form factors that we've tried. And that's the huge antenna attached to it.

So the other thing we did was we built our own version of a white space database. This is on the website whitespaces.msresearch.us; you can go on that right now. You can put in any address here, click on find address, and then click on show nearby incumbents. It will tell you all the TV stations that are there and all the white spaces that are available. We also have APIs that allow you to introduce a mic into the database. So the way it works is every white space device in our system right now constantly keeps polling the database to see which channels are available. And when we built our campus network -- the Microsoft campus is home to a huge number of wireless mics, and lots of them are in use -- we gave this API to the mic people here. If they faced interference at all, they could introduce their mic into our database using a command line API. During the break, we could give you a demo of that as well. And the way this database works is it takes as input the TV and mic data from the FCC, whatever is registered, takes a location as input, which is the address, and takes the terrain data from NASA.
Then we run it through our own more complex propagation model, and the output is essentially, at that location, what are the white spaces that can be used by the white space device? Yes?

>>: So in your database, you have entered local use of the [inaudible] phones on the Microsoft campus, so those frequencies are permanently --

>> Ranveer Chandra: No. We -- no, no. If we ever did that, we would be blocking out all the TV channels here.

>>: Yeah.

>> Ranveer Chandra: Because there are a huge number of wireless mics on campus.

>>: Yeah.

>> Ranveer Chandra: In this building itself we have 38 wireless microphones.

>>: Yeah.

>> Ranveer Chandra: So this is only when you are actually using the mic -- right now we had asked the people to introduce their mic into our system only if they face interference. And we'll show you what interference sounds like.

>>: This is a dynamically updated --

>> Ranveer Chandra: Yeah.

>>: [inaudible] okay.

>> Ranveer Chandra: So this is dynamic -- I could update it right now, and you'd see a channel that --

>>: Okay. Thank you.

>> Ranveer Chandra: So we verified that our database gives accurate results. We had our intern -- this was not George, this was another person along with George, Rohan Moty [phonetic]; he was from Harvard. He was super passionate, just like George is. And he decided to show us that the model that we were building was actually valid. So he drove more than a thousand miles over a few days, and he would stop every 6 miles, step out with his gear, and measure what the signal looks like. This is data over 57 points. As you can see, this goes all the way out to the Olympics and the other way. He would just drive in the middle of nowhere. Got pulled over by the cops twice. [laughter]. Yeah, it is scary, someone coming out with all that equipment in the middle of the forest and being asked, what are you doing here? But in the end, what we used these measurements for is to show this over here. So what this shows is a comparison of the models: how many white spaces do we lose? This is what the free-space propagation model would give us. This is where we are. And to the left is better. And what we saw was that, well, our database is much better than what the FCC mandated rules say. And we fed these back to the FCC, and they are looking at it.

Another problem we had was, where do we place the base station? So this is one of the buildings, building 112. This is where 99 is now; at that time it was all dug up here, but right now there's a building here. This is our building. This is where we have one of the base stations. And this is the entire route we actually drove the shuttle; we were collecting the GPS reading every second, along with the signal strength. And here what we show is the actual range that you would get over the white spaces. All right? So what you see is the green over here is UHF, the red is VHF. The X axis is the distance from the base station and the Y axis is the signal strength. As you can see, up to 700 meters, even beyond, you can get connected to a base station over the white spaces network. With Wi-Fi we didn't get across more than a hundred meters. So this could just lead to a very different kind of network installation. So that's why this spectrum is so awesome. And our work over here, including the mic work, has gotten a lot of press. We had lots of people come in and take our interviews; we got a lot of press.
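The range numbers above line up with a simple back-of-the-envelope calculation. As an idealized illustration only (pure free-space propagation, identical transmit power and antenna gains; this is not the terrain-aware propagation model used in the database):

```latex
% Friis free-space relation: received power falls off as (f d)^2.
P_r \;=\; P_t\, G_t\, G_r \left(\frac{c}{4\pi f d}\right)^{2}
\;\;\Longrightarrow\;\;
d \propto \frac{1}{f}\ \text{at a fixed received power},
\qquad
\frac{d_{600\,\mathrm{MHz}}}{d_{2.4\,\mathrm{GHz}}} \;=\; \frac{2400}{600} \;=\; 4 .
```

So a UHF white space link should reach roughly four times as far as a 2.4 gigahertz Wi-Fi link under the same idealized assumptions, which matches the "about four times the range of Wi-Fi" figure quoted earlier; the campus measurements do even better in places, partly because lower frequencies also get through walls and obstructions more easily, as mentioned for the Mountain View example.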
And it has had a lot of regulatory impact as well. We had the FCC chairman come here, and the head of the TRA, the Telecom Regulatory Authority of India, came here to see our demos. He saw the mic demo that George will show in a bit, along with the shuttle demo, the campus shuttle demo, and all of that. So we've had a lot of impact.

So now, switching to version 4. I'm sure this is the part that you're most excited about. But before going there, I wanted to give you a little bit of introduction to why white spaces are so cool and some of the work we've been doing. So -- I don't know how many of you know, but the FCC passed a second ruling on September 23rd, 2010, where for wireless mics it essentially reserved two TV channels. It said that on either side of channel 37, the first free channel is reserved for wireless mics. So there are two channels kept free. And the other thing is that people who are organizing bigger events can register their mics in the database 30 days in advance. You need to know 30 days in advance what frequencies you are using. So while there are issues here for the mic industry, it's also not super efficient from the white space side, for the people who were advocating the use of white spaces for unlicensed communication. The reason is that each TV channel is six megahertz wide, while a wireless microphone, as you'll see later in the demo, occupies only about 200 kilohertz of the spectrum. And the FCC requires us to vacate an entire TV channel, 6 megahertz of spectrum, even if only 200 kilohertz of it is in active use by a wireless mic. So that's not efficient. The other problem with this is that wireless mics are not used everywhere. If I'm in my apartment, it's very likely that no one else is using that spectrum at that location. Why shouldn't I be using that spectrum at that time? And the third and most important point is that in big cities we don't find many free channels. Over here, for example, this is a study that George did: in 39 percent of the cities that we looked at, if you took away the two channels, we didn't have any free channels left for white space communication. So what this means is, for white spaces to take off, you really need an application. You need people to build the hardware. They'll build the hardware if they see the economics behind it, if they see a use case. And if you are taking away 39 percent of the population right away, or even more, because these are the big cities, this technology won't even take off; it will get killed before the first devices are out there. So those are the three main reasons why we think it's inefficient. That is, first, just for 200 kilohertz we are vacating the entire 6 megahertz of spectrum. Second, mics are not everywhere; mics are only used in certain locations. And third, there are not enough channels in big cities. On the other hand, it's all -- yeah?

>>: If I got this straight, then instead of 6 megahertz, if they were just to say, no, you don't have to abandon 6 megahertz, just abandon a 30th of that, you'd be okay; or do you have side band issues?

>> Ranveer Chandra: Yes, we do have some side band issues.

>>: So all right. Let's say a 10th then --

>> Ranveer Chandra: Yeah, yeah. That would still be better.

>>: That would be --

>> Ranveer Chandra: That would still be better.

>>: And so what was the argument against that? Why not?

>> Ranveer Chandra: Why not? No, that would be a good first step.
But the thing is, that still doesn't solve the entire problem, because of the side band issue.

>>: [inaudible].

>> Ranveer Chandra: Yeah. It would mostly solve the problem, but even then, even at one-tenth, you're still losing out on the rest. And the biggest problem is when I'm right next to the mic. The mic is here and I turn on my device. That's the worst case.

>>: [inaudible] a little bit of a misnomer to say, well, look, it's only 200 K, because it's --

>> Ranveer Chandra: Yeah.

>>: You know, [inaudible].

>> Ranveer Chandra: Yeah. So that was the issue for the wireless industry. That's why we would like that problem to be solved. From the perspective of a mic operator -- we talked to James Toppel [phonetic], who does a lot of mic placement. And he doesn't like the ruling, because, first, nearly all mics in different regions will have to be changed, because you'd have to adjust your mics to operate only on the frequencies that are available there. And the second, the bigger one that he was concerned about, was that for his events, he says, I never know until the very last moment what channel I'll be using. He says at every event I have to adjust my frequencies because of some noise that is happening at that location, and I don't know 30 days in advance; even one day in advance would be difficult for me to know. So he doesn't like the existing scheme. He wants to have dynamic placement: he knows what the environment looks like at the event and wants to adjust his frequency assignment accordingly.

>>: On your first point, though, isn't the rule that there are two channels reserved for microphones, but they can also use other channels as available?

>> Ranveer Chandra: So --

>>: They're not confined to those two channels.

>> Ranveer Chandra: As long as the mic -- well, they would have to abide by the white space device ruling. So right now they can, they can operate there, but that will all go away. Essentially, imagine if white spaces become popular -- we'll show you what the interference looks like. No one would like to operate their mic on a TV channel on top of a white space device. I'll show you. It will be the worst case. You will really want to close your ears. You'll be like, okay, it's that bad. Yeah?

>>: How much bandwidth are we talking about for white space device communication?

>> Ranveer Chandra: So white space devices would communicate audio, video, data. So if you have -- so 6 megahertz is one TV channel.

>>: Right.

>> Ranveer Chandra: Right. So --

>>: How much does a typical white space device's bandwidth take up? I mean, [inaudible] take 200 K. How much does --

>> Ranveer Chandra: Oh, no. So phones are different, right? So if you look at LTE that is coming out, 4G, LTE, that's 5 megahertz of spectrum.

>>: Okay.

>> Ranveer Chandra: Wi-Fi is 20 megahertz. So over here, even if you use 5 megahertz and leave one megahertz for the guard band, you're using up the entire TV channel. So given this, our goal was -- well, in short: we want to enable white space devices to reuse a TV channel that is already in use by a mic. We want to do this without requiring any changes to existing mic deployments -- we want to make your life simpler; we don't want existing mics to change -- while ensuring that white space devices can use that channel without interfering with the mic. So in the rest of this talk, first, I will present how the RF interference from white space devices impacts the audio quality of a mic recording.
So this is a unique set of [inaudible]. And then in the second part, George will talk about SEISMIC. This is a protocol we built, and the system we built on top of it, that allows this kind of communication to happen. We leveraged the measurements that we got from our study to build this protocol.

So let me begin by giving you some background on what an analog wireless mic looks like. It's essentially one-way communication, like this device that I have right now. This is a transmitter. And the receiver -- there are [inaudible] here -- the receiver is inside, in a rack; all the mic receivers are sitting there. And the mic receivers themselves don't transmit. It's just one-way communication from the transmitter. There is a main carrier, this one, and there are two squelch tones around it. The main carrier carries the data. The squelch tones -- essentially, whenever there is interference that is higher than the squelch tones, the audio is usually suppressed. Most mics suppress it. That is, all you will hear is silence if the receiver cannot see the squelch tones. And mostly the difference between the squelch tones and the main carrier is about 30 dB, okay? We found this standard across all the mics. And in our study we used 6 different mics. As you can see, the most popular ones, Shure and Sennheiser, are there. And these are all things that we got from the equipment manufacturers themselves.

And this was a unique experimental setup. Before our study, most people, when you asked about interference with mics, would talk about RF interference. They would say, okay, in RF, how does my device interfere with the wireless microphone? Instead, what we decided to look at was the audible interference. So we came up with a setup where, in an anechoic chamber, we had a PC connected to speakers. We had a wireless mic next to the speakers. That was sending data to -- essentially there was a mic receiver here which sent the data back to the PC. And we computed the PESQ value to look at the difference in the audio recording. Any audio questions Ivan will answer; Ivan really helped us come up with this setup. And then what we did was we had a white space device with an RF attenuator -- you see the RF attenuator there, and the white space device? -- and we controlled the amount of interference that was input to the mic receiver. It went directly to the mic receiver. And in order to prevent any other RF interference to the mic receiver, we had it inside the [inaudible]. So all this setup was in the anechoic chamber. The initial plan was to take you all there, but there's already something going on there; that is a very unique setup as well. And using this, we played a huge corpus of audio samples, and we would compare the original to the recording and then see the PESQ value -- how did that change?

>>: Going back to that, you said [inaudible] can you define audible in terms of, say, signal and noise or something of that sort?

>> Ranveer Chandra: So audible quality -- Ivan, you're the best one to answer. But PESQ is what we went after, because that was something we could compute using a computer program. I guess the better way would be to put it up and let people vote to see if they can hear any difference. [brief talking over].

>>: So pretty much the ITU, in its recommendation [inaudible], standardizes how to do [inaudible] evaluation of sound quality, and it's called [inaudible].
Obviously, after you ask 100 people to grade the quality between one and five -- five is best [inaudible], one is not usable, not audible at all -- then you have [inaudible], and this is the mean opinion score, so it's a number between one and five which is what people think about the quality of this sound. Because this is kind of expensive and takes a lot of time, there is another recommendation, [inaudible], which is called P.862, which gives a computational proxy, a signal processing algorithm. You feed this program with [inaudible] and it gives you a number between one and five. If you do a [inaudible] with less than 30 participants, you'd better use the program, which will give you more reliable data. So at least we can do this completely automated and you don't have to call test subjects, et cetera, et cetera. And this is that so-called PESQ algorithm, Perceptual Evaluation of Speech Quality. It's not perfect -- it's actually very bad -- but this is the best we can do.

>>: So the boundaries for perceptual -- no one at this point, is it fair to say, has said let's translate this into some sort of numerical --

>>: Yes.

>>: Okay.

>>: And pretty much we know that if there is a 0.1 PESQ degradation, this is audible. And if there is a 0.2 degradation, the [inaudible] should be able to [inaudible] the difference. So if you keep the degradation below 0.05, this means that the quality is sufficient -- the degradation is essentially not audible -- except for some [inaudible] extremely precise uses like [inaudible], for example.

>>: Were the listeners trained, or were they random people?

>>: So the recommendation which [inaudible] gives [inaudible] does not specify the users at all. And the PESQ algorithm per se is actually a good signal processing [inaudible] supported by machine learning, which basically collects the samples from a couple hundred thousand people and [inaudible] mimics their behavior. So [inaudible]. In any case, they are not specified.

>> Ranveer Chandra: So, yeah, that's why we used PESQ. And that is why every time Ivan is here, I deflect this question to him. So then we went about, in this setup, quantifying how RF interference from a white space device impacts the audio recording. So here what we plot is the normalized score, that is, from one to five. We take the best value, that is, the original recording compared with the wired mic recording -- that had the best score -- and we plotted the wireless mic recordings' scores divided by this value. So the higher you go, the better it is.

So the first one is interference varying with time. What we did was we introduced small packets, 16 microseconds long, which is smaller than the smallest Wi-Fi packet that the Wi-Fi protocol requires you to send. And we changed the spacing between them: a 16 microsecond packet every, say, 500 milliseconds, every one second, and so on. And these are the audible pops that we can hear, even if the packet is just 16 microseconds long. This is the more scientific graph here which says, as you change the spacing -- this is in microseconds -- how does this value change? Right? So even all the way to the end we can hear audible interference. That is, we hear these audible pops even if they are very small packets sent more than a second apart. The second test we did was interference in frequency. That is, around the wireless mic we changed the amount of frequency that we suppress.
That is, we do not use -- so right now most wireless technologies, the way they operate, is they use contiguous spectrum. I use the entire 6 megahertz for communication. Here what we did was we came up with another technique which essentially notches subcarriers in the middle. So instead of the entire 6 megahertz, I'll use this spectrum and this spectrum and still carry out the data communication. And we are seeing how well that would perform. In fact, you'll see that in the demo. Then the next question was, okay, how much of the frequency do you need to suppress so that there is no audible interference? So we changed the amount of frequency suppression. That's what you see here, and this is again the normalized score; one is perfect. So as you can see, this is for one mic, the Sennheiser EW 100. You needed up to 200 kilohertz of frequency to be suppressed. If you suppressed 200 kilohertz for this particular mic, you would hear no audible interference. Notice this is 200 kilohertz out of an entire 6 megahertz channel. You just suppress 200 kilohertz, and you can use the rest for data communication. And this is the amount of bandwidth that you need to suppress for the different other mics that we studied. In one case it went all the way to more than 300, but for every other one, it was below 250.

>>: [inaudible].

>> Ranveer Chandra: Must be the -- the BPU. I don't know which.

So the next question was -- I showed you time, I showed you frequency. The third one is power. How does the amount of power, the interference power, impact the interference? So here what we did was we changed the amount of power. This should actually have been an animation where this went up in steps of 10. We changed the amount of power that we were interfering with at the mic. And what we found is that once the amount of interference power was above the squelch tones, the PESQ value just went haywire -- there was a huge amount of interference. Below that, you hardly heard any interference. And that's what these graphs show. That is, as we changed the amplitude, the green is the difference from the squelch tone and the red is the difference from the main carrier tone; they are 30 dB apart. Essentially, what this is saying is that as long as the interference stays just below the squelch tone, so that the squelch tone is still visible, we did not have any interference in the audio recording.

So to summarize: even short packets, very short packets, cause interference. What that means is we cannot completely overlap with a wireless mic in the frequency domain. The second is the frequency: the amount of frequency you have to suppress depends on the distance to the mic receiver. Essentially you want to keep it separated enough so that the squelch tones are visible. Similarly for the power: the amount of separation, the amount of suppression that you have, should be such that the interference power doesn't suppress the squelch tones, such that the squelch tones are visible at the mic receiver. So at this point, we'll switch to the demo. George, you want to go through the demo? And then we can take the break. And after that, the next part of the talk will be given by George.

>> George Nychis: Can everybody hear me? Okay. Hi, everybody. I'm George.
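As a concrete illustration of the normalized-PESQ comparison described above, here is a minimal sketch. It assumes the open-source `pesq` package (an ITU-T P.862 implementation available on PyPI) and 16 kHz mono WAV files with hypothetical names; the talk does not specify the authors' actual tooling, so treat this as one possible way to reproduce the metric, not their pipeline.

```python
# Sketch only: normalized PESQ, assuming 16 kHz mono WAV files and the `pesq`
# package (pip install pesq). File names are hypothetical.

from scipy.io import wavfile
from pesq import pesq   # pesq(fs, reference, degraded, mode); 'wb' expects fs = 16000

def pesq_score(reference_wav: str, degraded_wav: str) -> float:
    fs_ref, ref = wavfile.read(reference_wav)
    fs_deg, deg = wavfile.read(degraded_wav)
    assert fs_ref == fs_deg == 16000, "wideband PESQ expects 16 kHz audio"
    return pesq(fs_ref, ref, deg, 'wb')

# Best case: the original source compared against a wired-mic recording of it.
baseline = pesq_score("original.wav", "wired_mic_recording.wav")

# Wireless-mic recording made while the white space device was interfering.
score = pesq_score("original.wav", "wireless_mic_under_interference.wav")

normalized = score / baseline   # 1.0 means indistinguishable from the wired baseline
print(f"normalized PESQ: {normalized:.2f}")
```

Sweeping the white space device's packet spacing, suppressed bandwidth, or power while recording `normalized` at each setting reproduces the kind of curves shown in the talk.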
So what we're going to do is show you our demo of actually avoiding interference with the microphone and still reusing the rest of the channel. And so what we've essentially done is taken some of these measurements, where we understand what you need to do to reuse the channel without causing audible interference to a microphone, and essentially leveraged them to still provide white space device communication on the channel. So I guess I'll just start off by starting the demo. Then people can start to come up, ask questions, things like that, and we can have the break. So we have two devices here. These are the Adaptrum white space devices. So I'm just going to turn the white space device on first.

>>: [inaudible].

>> George Nychis: Do you want to get the microphone ready?

>>: Don't start transmitting just yet. So this is one mic, this Sennheiser mic.

>> George Nychis: So essentially what we're going to do is introduce the white space devices into the channel without suppressing any frequency. They're essentially going to use the full 6 megahertz and transmit constantly. And we want you to hear what the audible interference caused by the white space device transmissions actually sounds like.

>>: So on the spectrum analyzer here, what you're seeing is the mic in the RF [inaudible], so this is what the mic transmission looks like [inaudible].

>>: How does this handheld mic differ in its qualities from the [inaudible]?

>>: [inaudible].

>> George Nychis: Yeah. For some of the measurement results we actually used mics like this. We used some handhelds and some like these with the battery pack.

>>: So how do we know which one we're listening to when you've got your mic and the receiver --

>>: So part of this demo -- these are 600 megahertz, this is 500 megahertz. So we know which one is ours.

>>: Okay.

>>: So you can hear this mic.

>>: Barely.

>>: This will be really bad. So [inaudible] as soon as he starts the transmission, you will see that the mic -- I don't know if you can see this here -- this will get swamped with the white space transmission. You'll see [inaudible].

>> George Nychis: Yeah. So essentially now the mic is close, so --

>>: So this is really bad, right? So this is the kind of interference you'll see if you have a white space device starting to transmit while you have a mic receiving.

>> George Nychis: Right.

>>: But it gets really bad. So then, for the next part of the demo, what George will do is introduce the subcarrier suppression technique that we talked about. That is, can we just notch subcarriers around the mic? Okay?

>> George Nychis: So --

>>: So as you can see, you'll see the transmission here, just suppressed around the mic. So this is our transmission. We've notched it here. We have the remaining transmission here. You can see that they are actually transmitting video.

>> George Nychis: Right. So I'll start the actual transmission here. So what we're doing is we're transmitting video from the one white space device to the other white space device. We are suppressing frequency around the microphone to not interfere with it [inaudible].

>>: So now you don't hear that [inaudible] test, test. And at the same time the data communication between the white space devices is still going on. So essentially we are just suppressing [inaudible] and we can carry out the data communication, video communication. That's HD video. That's taking up a lot of bandwidth. But it's [inaudible].
>>: And then the notch has been calculated by you, or has it been sensed and --

>>: So this is what he'll talk about in the next part of the talk.

>>: Okay. So, yes, so now whoever [inaudible] wants to see the demo, come close and --

>>: So, break time. [inaudible]. Let's do 15 minutes. And the rest rooms are out the door and down the hall. You'll see a sign.

>> George Nychis: Hi, everybody, my name is George Nychis. As Ranveer introduced me before, I'm a PhD student at Carnegie Mellon University, and I did a lot of this work while I was here as an intern under Ranveer and working with Ivan, et cetera. So what I'm going to do is present SEISMIC, the Spectrum Efficient Interference-free System for MICs. What Ranveer presented was the study of RF interference and audio disruption for microphones. And essentially what we showed was something great: co-existence can actually happen on the same channel between white space devices and wireless microphones. We can have no disruption of the mic audio and gain up to 97 percent of the channel around the mic. So essentially the results show that only a single condition needs to be true: the interference power needs to be below the squelch tones. If it is not, no matter what, we're going to hear audio disruptions. Great, right? So all we need to do is know the mic's center frequency and then suppress the amount of bandwidth that the mic needs, right? So essentially, from some of the results that Ranveer showed us, we have these 6 microphones, and this is how much uninterfered-with frequency each microphone needed, right? So, center frequency, mic bandwidth, and we're home scot-free, right?

Unfortunately, it's not that simple. The amount of suppression that's actually required depends. Unfortunately, white space devices cannot suppress frequency perfectly. The reason for this is that as they try to suppress frequency in a band, there's some amount of power that essentially rolls off into the band that they're trying to suppress in. So they cannot suppress perfectly, and power is essentially leaked into this area. So, for example, here we present the frequency and power that result from trying to suppress frequency on the white space device that we used in the demo, the Adaptrum device. What we do is we keep the transmission power constant and slowly try to suppress more frequency. Now, if you had perfect frequency suppression, what would happen is if you suppressed something like 80 kilohertz here, it would be suppressed the entire way down, right? But it's not. Unfortunately, what happens is that power rolls off into the area you're trying to suppress. With perfect suppression this would be much easier, but unfortunately this is a fundamental problem with trying to suppress frequency with white space devices.

So the impact of this is that the suppression is going to need to vary. The stronger the white space device's interference, the more suppression is going to need to happen. And the weaker the mic is at the mic receiver, the more suppression you're also going to need. And I'll show with an illustration why this is. So first let's take this example. Right here we have our microphone, and we're suppressing a certain amount of frequency such that we're not interfering with the microphone. The power in the band is lower than the power of the squelch tones. Therefore, we're not going to have any audio disruption.
But now, what happens if the microphone power is actually lower at the mic receiver, right? So we would try to suppress this amount of frequency, but that would not work because of the power that's rolling off into the band. So it's not sufficient. And this is essentially changing based on the power of the microphone. So what would actually need to happen is we would need to suppress more frequency to not interfere with the microphone. So clearly the amount of suppression that we need is going to change based on the mic's signal strength. But now, to make the problem harder, let's take this scenario. We have our microphone power, and we're going to fix the microphone power. In this example we are essentially showing that this amount of suppression is sufficient. But now, what if the power of the white space device is actually higher? Here it was at roughly 40 dB; here now we're getting closer to 50 dB. If the white space transmission interference is higher, essentially it's no longer sufficient again. What we would need to do is suppress more frequency. So these are two cases where I've shown that the mic signal power is going to matter in terms of the amount of frequency suppression needed, and so is the white space device's transmission power.

To further show this, what we did is we evaluated how much suppression we needed when we kept the microphone power fixed and changed the interference power of the white space device. And so here what we have is the amount of frequency suppression on the X axis, and on the Y axis we have our normalized PESQ score. And we have these small gray lines which show, for each white space device interference power, how much frequency suppression was needed. So, for example, here, when the white space device power was low, we essentially needed only a small amount of frequency suppression, like 25 kilohertz. And the reason it's so small is because we're basically just below the squelch tones already. And as we go on and the interference power becomes greater, we need to suppress more frequency: 200 kilohertz, 275 kilohertz, almost up to 350 kilohertz. So what we're showing is that the amount of suppression that is needed is going to vary.

So this is a challenge, right? It's extremely challenging because the information that we want to know is at the microphone receiver. So if we have a wireless microphone, the wireless microphone's receiver, and a white space device, there are two pieces of information that we want to know at the white space device. The first one is the mic signal power at the mic receiver. And the second one is the white space device's interference power at the mic receiver. Right? This is essentially going to determine how much frequency we need to suppress, and all of it matters at the mic receiver. But unfortunately the mic receiver is completely passive. The mic receiver transmits nothing, right? And to make this harder, something that has been used in the past is channel reciprocity: if I know someone's transmission power to me, I can roughly estimate my transmission power to them. But since this mic receiver does not transmit, we cannot estimate our interference on the mic receiver. Likewise, we can sort of estimate based on the wireless mic, but that does not tell us what its power is at the mic receiver, right? So we essentially don't know either of these components.
And this is what makes this problem extremely challenging. We can only do proper suppression if we know both components at the white space device. If not, we would unfortunately leave the mic susceptible to audible interference, and we do not want to do that. So what I want to present is an overview of the SEISMIC system. Essentially, based on these challenges, what we're going to do is augment the mic receiver with what we call a MicProtector. What this MicProtector allows us to do is monitor the key components that allow us to perform the suppression. Right now we picture this as a stand-alone device that you could put onto a legacy mic system, but we envision this type of system being built into future mic receivers. And so the MicProtector is providing a couple of key components. It's enabling interference detection at the mic receiver. And what it's going to do, which I'll talk about, is notify white space devices of impending disruptions to the audio. The keyword there is impending. We never, ever want the audio disruption to occur, so it's going to allow us to adapt before it occurs. And all of this is going to leverage the measurements that Ranveer presented, with a very in-depth understanding of what exactly causes audible interference on microphones.

So what I'm going to do here is present the MicProtector design. This here is the view of the MicProtector. Think of it this way: on the X axis you have frequency, and on the Y axis you have the amplitude of your signal. So I'm giving you the inside view of the MicProtector. And here we have some mic that it is essentially protecting. The MicProtector implements three key components. The first one is interference detection. Essentially, what we need to understand is what the level of interference is from the white space device. And to measure this amount of interference, what we do is we introduce two very small, just 25 kilohertz, bands on each side of the mic. By doing that, we can measure the amount of interference in these bands without having to try to measure it within the mic's operational band, which is difficult, because once the mic becomes active, the power in there is going to shift due to the FM modulation. The next thing it's going to do is provide interference protection. I'm going to get into how exactly this happens, but what we knew from the mic interference measurements was that any power that's below the squelch tones does not cause audible interference. So essentially what we're going to do is introduce a protection threshold below the squelch tones, and we're going to monitor it. Any time the interference is below the squelch tones, we're okay. As the interference begins to approach them, then we have a problem, right? And so what we're going to do is monitor the noise in relation to this protection threshold, and I'll talk about how we ensure that it never exceeds this threshold very soon. Then the final component of the MicProtector is the notification of impending interference. To do this, we introduced a simple technique called strobing. Essentially what we're going to do is introduce a signal in the control bands that are just outside the mic's band. And once a white space device detects these signals, it knows that it's about to cause audio disruption to a mic. And so these three components are essentially what we build the SEISMIC protocol on top of.
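Here is a minimal sketch of the monitoring pass those three components imply, written against a hypothetical software-defined-radio interface (`radio.measure_power`, `radio.send_strobe`). The 25 kHz control bands and the 10 dB margin below the squelch tones come from the talk; the band offsets and everything else are illustrative, not the authors' implementation.

```python
# Sketch of one MicProtector monitoring pass (hypothetical SDR interface).
# All powers are in dBm; the control-band offset is illustrative.

CONTROL_BAND_HZ      = 25e3    # two small bands, one on each side of the mic
CONTROL_OFFSET_HZ    = 125e3   # hypothetical offset of the control bands from the mic center
PROTECTION_MARGIN_DB = 10      # protection threshold sits 10 dB below the squelch tones

def micprotector_step(radio, mic_center_hz, squelch_dbm):
    """One pass. `squelch_dbm` is the measured level of the mic's squelch tones
    at this receiver; `radio` is a hypothetical SDR front end."""
    # 1. Interference detection: measure white space device power in the control
    #    bands just outside the mic's band (measuring inside the band is hard,
    #    because the FM modulation moves the mic's own power around).
    low  = radio.measure_power(mic_center_hz - CONTROL_OFFSET_HZ, bandwidth_hz=CONTROL_BAND_HZ)
    high = radio.measure_power(mic_center_hz + CONTROL_OFFSET_HZ, bandwidth_hz=CONTROL_BAND_HZ)
    interference_dbm = max(low, high)

    # 2. Interference protection: keep a threshold below the squelch tones so we
    #    react before the interference can actually disrupt the audio.
    threshold_dbm = squelch_dbm - PROTECTION_MARGIN_DB

    # 3. Notification: strobe (on/off-keyed power in the control bands) to warn
    #    white space devices of the impending disruption.
    if interference_dbm >= threshold_dbm:
        radio.send_strobe(mic_center_hz)
```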
And I'll talk about how we can actually do this without ever interfering with the microphone soon. So, just a brief overview of strobing. Strobing is just one technique that we came up with to allow notifications to a white space device without adding a lot of complexity to it. Essentially, a strobe conveys that there's impending interference on a microphone. And what we're doing here is we're introducing and removing power in these control bands. If you look at it, you could see it as similar to something like Morse code, right, where we're essentially strobing a signal. And it's also similar to on/off keying, for those of you who are more wireless oriented. So what we do is we quickly introduce and remove power in these bands, and from this we allow notifications to the white space device. You can sort of think of it as carrier sense plus plus: all the white space device is doing is looking for power in certain patterns that match these strobe patterns.

Okay. So what's extremely important is the SEISMIC protocol, right? It's how we can actually determine how much to suppress without ever interfering with the mic's audio transmission. So what the MicProtector is doing is monitoring the interference, and it strobes the white space device if there's impending audio disruption. So again, impending is the keyword here. So how do we do this? How do we actually leverage the strobe without ever interfering? What the white space device does is it sends short probes with increasing transmission power. And then, any time the MicProtector says, hey, you're getting close to my protection threshold, the white space device suppresses frequency and then tries again. And I'll give an illustration of how it does this. But essentially what we're doing here is exploiting the FM capture effect: any interference below the squelch tones is not going to create audible interference. So we're going to use this as sort of a way to signal with the mic system.

>>: [inaudible].

>> George Nychis: Sure.

>>: [inaudible].

>> George Nychis: This is on.

>>: On the receiver side of it.

>> George Nychis: I'll kind of unplug stuff until it goes away.

>>: That's better.

>>: That should be off.

>>: Thank you.

>> George Nychis: Everybody good? Okay. So let me give you a view of what happens from the view of the MicProtector, and then I'll give you the view of what happens from the white space device. So again, here we have this view, and what's going to happen is that a white space device is going to start probing the channel. It's going to send just a bit of power in a probe packet, and then it's going to listen. It's going to ask, okay, am I going to cause impending audio disruption to any microphones? It's going to listen for strobes. If it doesn't hear any, it keeps increasing its transmission power. And so this is going to happen when a white space device first enters the channel. It doesn't know the placement of any microphones, it doesn't know their powers, anything like that. At this point, it's trying to converge to the proper frequency usage of the channel. So slowly it's going to increase its transmission power until it hits the protection threshold. At this point, the SEISMIC protocol really kicks in, and essentially the white space device is going to adapt to the microphone being in that frequency.
So to do this: at this point, what the MicProtector realizes is, hey, if you transmit any louder you're going to cause audio disruption to the mic system. So what it does is notify the white space device of this by using the strobe signal. So what happens from the view of the white space device, and how does it actually converge? What we have here is time, and we have packets from a white space device and the MicProtector. The blue packets are these probe packets where it's slowly increasing its power, and the strobe packets are the responses from the MicProtector. So the white space device enters the channel. It doesn't know where any microphones are. It sends a probe packet at a minimal transmission power. Hears nothing. It increases its power slightly and probes again. At this point, the MicProtector strobes the white space device and notifies it of impending interference. It says, if you transmit any higher, you're going to interfere with me, right? So what the white space device does, instead of saying I'm not going to use this channel, is say, hey, what if I suppressed 50 kilohertz? Is that good enough? If it's not, the MicProtector will strobe back and say, nope, still not good, right? And the white space device is going to react and say, how about 100 kilohertz? And at this point, this protocol continues. If 100 kilohertz was okay at that transmission power, it slowly begins to ramp up again, right? And by going back and forth between the white space device and the MicProtector in this way, the white space device essentially converges to optimal frequency usage in the channel at a maximum transmission power. So we've converged to co-existence of the white space device with the microphone in the channel without ever causing audio disruption. >>: [inaudible]. >> George Nychis: Right. So essentially we have this protection threshold, right, which is the point at which it's going to notify. So we did a pretty in-depth study of where this threshold needs to be. The threshold needs to account for several factors. One is the fact that the mic's signal strength is changing over time, right? And so what we want to do is protect the mic at any point in time; in between measurements that value can change, and we want to make sure that the protection threshold allows for this fluctuation. Right? And so in the implementation we end up using a 10 dB separation between the squelch tones and the protection threshold. And we've done a bunch of evaluation to show that even if you separate it more, the amount of loss that you get in frequency reuse is not significantly high. Right. Uh-huh? >>: [inaudible] elapsed time to get to this. >> George Nychis: So in our implementation we end up using software defined radio to do this. We can converge on the order of hundreds of microseconds. If you were to implement this in something that could do this process faster, in terms of detection of strobes and the time the probes take to transmit, it could go even lower. So that's the nice thing about being able to enter a channel and converge without having to spend a lot of time seeing if it's going to work out. >>: So from a user perspective, it's essentially instantaneous? >> George Nychis: To us. Right. >>: You've been talking so far about one microphone and one white space device.
Are you going to talk about multiple -- >> George Nychis: So I'm not going to talk in detail about multiple, but this does work for multiple microphones. And we have a formalization in our paper that shows that this will work for a number of microphones in the channel. If you think about how this works: the device slowly ramps up its transmission power. 50 kilohertz might work for one mic in this part of the spectrum; then maybe it needs 100 here. As it starts to ramp up more, then this one's going to yell at it, and it needs to suppress more in this area. So eventually you converge to multiple notches in the spectrum. >>: And what about multiple white space devices? Didn't you say that the notch is cumulative, and as you add more white space devices the bottom of the notch is cumulative and -- >> George Nychis: Right. >>: White space devices. >> George Nychis: Correct. So the problem with this would be, let's say that multiple white space devices ramped up at the same time, right? This is something one of our other collaborators, Thomas, worked on, which was a formalization that shows that if there are multiple white space devices ramping up in the spectrum at one time, this accumulation will still not exceed our protection threshold. So we can allow multiple devices to ramp up without audio disruption. >>: Can I ask a more basic question? >> George Nychis: Sure. >>: If you have one device, the bottom of the notch is, what is it, minus 30, minus whatever. If you have two devices, is the bottom of the notch higher in absolute energy present? >> George Nychis: So that's additive there. So it will be higher. We don't expect it to all add up excessively, because no one white space device is typically going to be transmitting at the same time as another white space device. Typically these do carrier sense, right? >>: [inaudible]. >> George Nychis: Right. Right. So -- >>: So if I'm in an auditorium with 500 people and 499 of these devices, or 5,000 people and 4,999 of these devices, is the bottom of the trough going to be higher than my microphones? >> George Nychis: No. >>: By definition? >> George Nychis: No. So the reason -- >>: [inaudible] simple presence of those things. >> George Nychis: Right. So the reason for this is, during this convergence period, let's say that whenever they converge they are transmitting at the same time -- that additive interference will still be what the MicProtector strobes back against, right? So it's going to be protecting against the additive interference in there. Thomas is definitely the best person to ask about this, but we have a nice formalization in our paper that shows even if they did all try to ramp up, we still will not exceed that threshold. >>: [inaudible]. So two of them at the same power, you are adding 3 dB. >>: In the best case. >>: That's the best case, right? So you are already taking that as a reference point. [inaudible] it's still whatever it is plus 3 dB. The individual's power plus 3 dB. So you can treat it as the same device [inaudible]. >>: Yeah, but if I've got 5,000 of these things -- >>: Oh, but during -- >>: Turning on at the same time? >>: So on an analog, on the microphone, those would be like continuous-time transmission [inaudible]. >>: Microphones -- >>: The white space is not like a continuous transmission, it's kind of like a bursty, packety -- so even if you have 5,000 white space devices, they're not all on at the same time the way the microphones are on at the same time.
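For readers outside RF, the "plus 3 dB" arithmetic in this exchange is just incoherent power addition; the short sketch below works it out. The -70 dBm starting level is only an example value.

```python
# The "plus 3 dB" arithmetic from the exchange above: combining N equal,
# uncorrelated interferers raises the total by 10*log10(N) dB.
import math

def combined_power_dbm(powers_dbm):
    """Sum incoherent powers given in dBm and return the total in dBm."""
    total_mw = sum(10 ** (p / 10.0) for p in powers_dbm)
    return 10.0 * math.log10(total_mw)

print(round(combined_power_dbm([-70.0, -70.0]), 1))  # -67.0: two equal sources, +3 dB
print(round(combined_power_dbm([-70.0] * 10), 1))    # -60.0: ten equal sources, +10 dB
```

As the speakers note, carrier sensing means far fewer than N devices actually transmit at the same instant, so the worst-case sum is rarely reached in practice.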
>>: These are sending on an FM carrier [inaudible], so only when I have to send a packet do I send it. And the other thing with these devices is, if it's Wi-Fi based, if all of us are in the auditorium, only one will send at a time. Everyone else is going to back off. They don't transmit anything because of the carrier [inaudible]. I sense the medium. If it is free, I send. If there are 5,000 people, well, even then, all [inaudible] of each other, only one will send. And [inaudible] you don't expect all of them to be active. >>: So if these devices are telephones or cell phones, only one person can talk at a time? >>: Cell phones are different. They follow a different protocol. This is for unlicensed spectrum or [inaudible]. This is something that is specified. You cannot just go and [inaudible]. >>: Okay. So at the break, I asked what kind of devices these would be -- the future things that are going to be developed for this spectrum. And we came up with cell phones and computers. >>: Cell phones as in smart phones. >>: [inaudible]. >>: Not the telephone channel. >>: For browsing. >>: For data -- >>: Just like you browse on Wi-Fi. >>: Smart phone sending data [inaudible]. >>: You could use it for an audio channel, but even then you have small packets, and you don't always send them. Just like you do Skype or something else over your laptop. It's the same [inaudible]. It's using Wi-Fi, but it is not always sending. Even if I'm the only user, the only laptop connected to the access point, it's not always transmitting. It sends a packet. >>: I understand. >>: Waits for a while, sends another. >>: But if I have 5,000 of them -- >>: Skype won't work. So that [inaudible] won't work. >>: They're going to clog each other up, just like if you have too many Wi-Fi devices in a given area. >>: Okay. >> George Nychis: Question? >>: I take it the strobing is carrying the center channel information, so you're not actually trying to sense where the center channel -- >> George Nychis: Right. The strobing is actually cute in that sense. It's something that I wanted to bring up but forgot to mention real quick. The nice thing about these control bands is they're on both sides of the microphone. And once you detect the two strobe signals, you know the mic's center frequency as the midpoint between these signals, and you know the mic's bandwidth from the distance between these signals. So it's basically learned from the location of the strobe signals. >>: [inaudible] white space device is looking -- finding the frequency of the strobe as well. So you don't necessarily even have to go by 50 kilohertz increments, you could start at -- >> George Nychis: Yeah, you could start at something bigger to converge more -- >>: Based on where the strobe -- >> George Nychis: Right. You could do that. Uh-huh? >>: Yeah. I live in an area where there's probably like a dozen different networks and they're all Wi-Fi, and you can see them all when you're trying to connect to your own network. And they're all in the same frequency pretty much and they're not conflicting with one another. But you said that with this type only one transmits at a time? Why not somehow negotiate like Wi-Fi does? >> George Nychis: You mean one MicProtector, or -- maybe I'm not understanding the question. >>: You said something about only one white space device transmitting at a time.
>> George Nychis: Oh, so what we're saying is that even though there are any number of networks that you see there, at a given point in time what happens is a device carrier-senses the channel and says, okay, there are N of us, but only one of us can actually successfully transmit at a time because we're using a shared medium. So what happens is I transmit, then you transmit, then I transmit, then you transmit. So even though there are two of us, at any one given time only one of us is transmitting. Yeah. And so we essentially alternate using the medium. >>: Using packets. >> George Nychis: Using packets. Right. >>: [inaudible]. >> George Nychis: Correct. And the downside is, if there are a thousand, then you have to alternate between all of you. So that's why when there's a lot of networks it's worse, because you have to share with that number of networks. Okay. Fast forward here again. Okay. So just a quick summary of the SEISMIC design. What we did is introduce the MicProtector, which monitors the key components needed to understand how much frequency we need to suppress. So, again, we picture this as a stand-alone device, but it could definitely be built into future mic receivers. The white space device and the MicProtector engage in the SEISMIC protocol, and what we showed is that by doing this we ramp up to an optimal suppression and maximum transmission power without ever creating audible interference on the microphone. And I guess we've talked about this a couple of times now, but you can see the paper that we have in the upcoming CoNEXT 2011 conference, which shows that we can actually perform this with multiple microphones without ever creating audible interference for them. Question? >>: What happens when there are so many microphones in use that there's no spectrum left? >> George Nychis: So if there's a lot of microphones in the channel, this is a decision that the white space device will need to make. It will say, you know, using this channel at this point is maybe not worth it to me. If there's 25 or 50 microphones, it might say this channel is simply too busy, and the amount of benefit I'm getting from that channel is not significant enough. Right. >>: So that means the white space device will monitor, and it will decide we cannot operate in this space at this time? >> George Nychis: In that channel. It could use a different channel instead. Right. So some of the work that Ranveer did on networking over the white spaces looks at this: we have all these spectrum fragments, and they came up with a metric that says, with all these fragments, let's pick the ones that are simply the best to use in terms of bandwidth. So for this, you might decide that because there are so many microphones we're not going to use a specific fragment of the spectrum. >>: 39 percent of the urban areas that have all wireless microphones -- >> George Nychis: Yeah. So we'll show some results with many-mic scenarios. The nice thing is that mics do exhibit FM capture. So in some of the scenarios we evaluated, you didn't even need to suppress anything, and the reason is that the interference is below that FM capture level. So there is that nice property of FM microphones. So what we did is evaluate SEISMIC. We built a full prototype of the MicProtector and a SEISMIC-enabled white space device, and we evaluated real-world scenarios.
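Before the evaluation details, it may help to sketch the white space device's side of the strobe handling George described a moment ago: detecting the on/off-keyed strobes in the control bands ("carrier sense plus plus") and reading off the mic's center frequency and bandwidth from where those strobes sit. The pattern, threshold, and example frequencies below are illustrative assumptions.

```python
# Sketch of strobe handling at the white space device, as described above:
# detect an on/off-keyed power pattern in a control band, then infer the mic's
# center frequency and bandwidth from the two control-band locations.
# The strobe pattern, threshold, and example frequencies are assumptions.

STROBE_PATTERN = [1, 0, 1, 1, 0, 1]   # hypothetical on/off sequence sent by the MicProtector
DETECT_THRESHOLD_DBM = -80.0          # assumed "power present" threshold in a control band

def strobe_detected(power_samples_dbm) -> bool:
    """Carrier sense ++: look for the strobe's on/off pattern in per-slot power samples."""
    bits = [1 if p >= DETECT_THRESHOLD_DBM else 0 for p in power_samples_dbm]
    n = len(STROBE_PATTERN)
    return any(bits[i:i + n] == STROBE_PATTERN for i in range(len(bits) - n + 1))

def mic_from_control_bands(lower_band_hz: int, upper_band_hz: int):
    """The control bands bracket the mic: their midpoint gives the center frequency,
    and their separation gives (roughly) the mic's operational bandwidth."""
    center = (lower_band_hz + upper_band_hz) / 2.0
    bandwidth = upper_band_hz - lower_band_hz
    return center, bandwidth

# Example: strobes seen at 601.9 MHz and 602.1 MHz imply a mic centered at
# 602.0 MHz occupying roughly 200 kHz.
print(strobe_detected([-95, -70, -95, -72, -71, -96, -70, -95]))   # True
print(mic_from_control_bands(601_900_000, 602_100_000))            # (602000000.0, 200000)
```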
So for example a mobile microphone, 20 to 30 feet from the mic receiver, like something within this room -- that's somebody taking the microphone and walking around; maybe we're at a concert, we're swinging it around, we're excited, right? And then we also evaluated it under a mobile, low-power microphone scenario. So this is something where the distance between the mic and the mic receiver is large and the microphone is operating at a relatively low power, which is challenging, because we need to make sure that we never interfere with the microphone in that scenario. So the key thing we're evaluating is that SEISMIC must properly adapt, because mic powers fluctuate. I'll give three simple scenarios here that we measured. The first one is a stable microphone. On the X axis here we have time; on the Y axis we have the amplitude of the squelch tones -- it could also be something like the amplitude of the mic signal. When the mic is stable, let's say it's on a podium, that signal is stable. But now what happens in the second plot here when human interference comes in? It's stable, but now I'm sort of walking around, right? And by doing that, I'm fluctuating the microphone signal. In the third one here, I showed having the microphone in my hand and walking around, picking it up to my mouth, dropping it down to my side, et cetera, turning my body. And by doing that, the mic signal can fluctuate a lot relatively quickly. Right? And so SEISMIC must adapt and never interfere in any of these scenarios. >>: Have you tried this, say, under circumstances -- maybe you can just say the Fifth Avenue theater or the Paramount, you know, where they have a fair amount of people who are actually moving, you know what I'm talking about, the real world. >> George Nychis: Right. >>: How did it turn out? >> George Nychis: We did not get to evaluate under those scenarios yet. We've actually been in talks with James Stepho [phonetic] about taking our system and seeing how it works at the Latin Grammys, and he invited us to some other events where they have a lot of microphones, and they want us to see how our system performs in those scenarios. >>: [inaudible] criteria is how many people doing what. >> George Nychis: Yeah, yeah. Right. And so we want to evaluate this under those scenarios; we have not gotten the opportunity yet. So what we've done is evaluate under single-mic scenarios and multiple, relatively smaller scenarios. Right. So -- >>: And when you do that, you're going to have to put one of these interference detectors on every microphone. >> George Nychis: So what we do is require one for an area. Let's say, you know, here what we have is a rack in here, which has multiple mic receivers for the mics in this room. All we require is one MicProtector for that area, because all it needs to do is accurately estimate the interference of the white space device and the mic signal, and by putting it near that rack, it can do that for all of the receivers. >>: As long as [inaudible]. >> George Nychis: Right. It needs to know the frequencies of the receivers to protect. Right. >>: So it's not necessarily built in to every receiver? >> George Nychis: It does not need to be, right. So what we've done is we've built a stand-alone prototype that could protect something like a rack.
In the future it could be built into every mic receiver, and therefore, you know, you won't need one to -- >>: So you would not need to buy all new receivers when this goes into effect, you could -- >> George Nychis: Exactly. Right. This is a key thing that we wanted to enable. We don't want to change the legacy mic receivers in any way. We want to allow the mic industry to operate, and we want to provide protection for them while we're using the channel. >>: [inaudible]. [laughter]. >> George Nychis: Yeah. I mean -- >>: [inaudible] 50 bucks or 500. >> George Nychis: It's a good question. Right. It's nothing that we've really been able to evaluate so far in terms of complexity of the device. The complexity is not high, right? For example, the strobing we developed just requires basic power generation, things like that. And essentially all it needs to do is monitor interference power in an area, right? >>: [inaudible] it's already monitoring [inaudible] figure out the best channel it should be on. The transmitter is a very cheap addition. Strobing is using [inaudible] which is very simple to start doing [inaudible], so that way it's relatively cheap, because it's already doing the interference detection. All it needs to -- >>: Only one. The mic receiver is only doing it when you tell it to. >>: But either way it won't cost more to do it multiple times as opposed to once. The hardware is already there. The transmitter needs to be added. >>: It depends how many you want to manufacture. [inaudible]. >>: So can you predict about how long it will be before you actually test it out in these real-world environments? >> George Nychis: Yeah. So we were invited in November to evaluate this at the Latin Grammys. That's something that we definitely want to work with James on doing and evaluating there. >>: Next month. >> George Nychis: Yeah. So it's within a month from now. Something that we want to do. And, you know, he does various events, so it would be great to evaluate this under those scenarios. >>: So the reason we're not giving a timeframe is we're swamped with different things. And we do need -- this is something that James wants to take [inaudible] to. So we are working with him. If there's an event in Seattle or Pittsburgh, that makes it easier, because if it's in Seattle I can do it here and George can do it in Pittsburgh. >> George Nychis: Right. Right. >>: So we are looking forward to the right opportunity to [inaudible]. This year's company meeting, the [inaudible] company meeting, [inaudible] opportunity -- he comes with us there, but I wasn't in town at that time. But we'll do it soon. >>: Do you have contacts at, say, the Fifth Avenue or the Paramount [inaudible]? >>: No. >>: Would you want some? >>: Yeah. >> George Nychis: Yeah. >>: Okay. We can do that. >>: If you have friends there. Because we don't want it to be a public event. It's just research. >>: Right. Well, knowing people there you could get in on an off day and fool around -- >>: Yeah. That would be nice. >>: Okay. We can make that happen. >>: I have a question. >> George Nychis: Sure. >>: About the [inaudible] -- what is the property of the microphones that determines the amount of frequency that has to be suppressed? >> George Nychis: Okay. Well, essentially what the operational band of the microphone is.
>>: No, but I mean what is it about -- like you said, for different microphones, to prevent interference you have to suppress different bandwidths of frequency. >> George Nychis: Right. >>: What about the microphone determines that? Is it something about the transducer, or is it some other property of the microphone? What is it that determines -- >> George Nychis: So I'm not a mic expert, but my intuition would be that it's something like the frequency range of the microphone. Essentially what you're doing is FM modulation, where based on the audio input you're shifting the frequency of your carrier. So it could be something like -- >>: [inaudible]. The question is, what is it about -- have you looked at what the properties are in the different microphones and what [inaudible]. >> George Nychis: Right. [brief talking over]. >>: [inaudible] so each one of [inaudible] you see the sensitivity [inaudible] depends on the SNR at which they can separate. So that's divided -- the more bandwidth you suppress, the less noise we cause. And each receiver -- I think each one has its own audio component, and whatever sensitivity they have determines the amount of the bandwidth that they have [inaudible]. >>: Yeah. But at the [inaudible]. >>: [inaudible]. >>: How would that relate -- that's different from RF. [inaudible]. [brief talking over]. >>: In the paper that was recommended that we read before this, which was white space devices performing in a Wi-Fi, like a Wi-Fi -- I don't remember the title. But there was a cute trick in there where, when the white space devices decide on an initial channel that they're going to transmit on, at that time they create a secondary channel to jump to should a wireless mic click on in the primary. Is that still operative in this -- >>: So that is a separate paper. That wasn't about interference; that was about if I have a link and the link gets congested or blocked, how do I recover [inaudible]. So this [inaudible] backup channel is being considered as part of the Wi-Fi standard that it [inaudible]. Yeah. But we are talking about that in this paper. But, yeah, that is -- >>: But that would be incorporated into -- >>: Maybe you're going to discuss this later, but what happens if a mic gets turned on? You're going to have a moment of time. How often does the white space device look for the strobe, et cetera? >> George Nychis: Right, right. That's a very good question. So I'm not going to talk about it in here, so I'll just talk about it right now; we have it in the paper. Essentially, when a microphone turns on in the spectrum, this is an immediate cause for concern, right? Because all the white space devices have converged at some point that ignores the presence of this microphone. So when a microphone first enters a channel, what the MicProtector does is send a special strobe that essentially says, hey, I'm a new mic in the channel, and that needs to kick the white space devices back into convergence mode. So whenever a new mic enters, all the white space devices converge again, and that convergence accounts for that microphone. So we consider a microphone entering to be a special event. When a microphone leaves -- so what we do in terms of the protocol is periodically reprobe for frequency.
We essentially say, okay, can we add a little bit of frequency back? It will strobe at us if we cannot. If it doesn't, then we keep adding a little bit more. So if a mic leaves, over time we will essentially regain that spectrum. >>: [inaudible] with one mic, but say several mics come in and go out. >> George Nychis: So mics coming in will not be audible. Mics going out will also not be audible. And the reason is that once we've converged to this point, we've essentially stepped up to using that frequency. If we want to use that frequency again, we step down and try to put a piece of that spectrum back in, because we've already verified that that previous step works, right? That's how we got there. Things will fluctuate over time, but if they do, we have this protection threshold, which will basically yell at us if we need to back off more. >>: It will work as long as the incoming microphone isn't offstage, backwards, really low in level -- as long as your MicProtector is able to even tell that it's there trying to come on. >> George Nychis: So in terms of that, the MicProtector needs to know that the mic is turning on. >>: Right. >> George Nychis: The MicProtector is near the mic receiver. If your mic receiver can't hear your mic, then that's a separate problem, right? But essentially that MicProtector will know that that microphone is turning on. >>: So it will -- it will start strobing, I mean, as soon as the [inaudible] as soon as the channel kicks on. >>: As soon as it detects the carrier wave. >> George Nychis: Right. >>: As long as the white space device isn't loud enough that it's swamping the carrier wave. >> George Nychis: Yeah. Another question? >>: What about the case where you have a stereo pair of microphones that is now being dynamically changed when another mic comes in or goes out? Because you're rejigging the system if you've got a left and right pair of mics. I don't know if you actually use this in practice. Someone in the industry can tell me if it's not done in practice, then -- >>: Actually, I remember a case [inaudible] theater where there were three pairs, three pairs of 451s like this, across the stage, where that was dynamic [inaudible]. [brief talking over]. >>: Wireless? >>: I'm sorry. You're -- [inaudible]. >>: One of them was an individual microphone, so multiple microphones. [brief talking over]. >>: [inaudible] now your image is wandering around. >>: [inaudible]. >>: [inaudible] that's not in the RF -- >>: Yeah, that doesn't relate to this. >>: Isn't it important to differentiate between the receiver location and the antenna location? >> George Nychis: Receiver and antenna location? >>: Because the antennas are not necessarily located at the receivers. >> George Nychis: Yeah, right. That's actually an interesting point. So the way we've evaluated these, the receivers' antennas are essentially connected to -- >>: Well, they're going to be connected, but they're not going to be -- >> George Nychis: Right. So the way that you can handle that, which is what we've done with some of these systems, is the MicProtector is basically bridged into the system via that same antenna input, right? And so from that, the MicProtector has the same view as the receiver's antenna. >>: So they don't have their own antennas, they're using the [inaudible]. >> George Nychis: Right. Right. >>: Okay. Except that -- that assumes only one set of antennas for the pile of receivers.
If you have individual antennas on receivers with -- >> George Nychis: Right. So in that case -- >>: And they're on different frequencies. In that case -- >>: And the antennas are in significantly different locations. I mean, usually when you have a couple of stacks you're going to put them in racks next to each other, so you're virtually in the same space. >> George Nychis: Yeah. So maybe in that case you need an additional MicProtector if they're significantly separated. But it's really good to hear what your use cases are, because we don't use these systems as much as you do, obviously. This is actually really insightful. >>: The different frequency bands. >>: I'm curious what kind of data is being transmitted on this white space. Are you able to take multichannel audio and send it out? >> George Nychis: So essentially, whatever these white space devices send is some data transmission, right? It can be what your laptops do today -- carrying video, carrying Skype calls, basically any sort of data communication. >>: So [inaudible] a while, a week ago, they talked about proprietary Ethernet systems for transmitting broadband audio among devices. Is that the quality of transmission you'll be able to do with this system? >> George Nychis: I'm not too familiar with that -- >>: The thing with most Ethernet-based audio transports, the main thing is they can't tolerate other devices trying to talk on the same network. Well, they can do that a little bit, but they can't do a lot of it. And this is public, unlicensed, where anybody else might try to use the same channel, so to say. And so I think the network traffic would become too congested to allow that to happen efficiently. But if you're able to isolate and say this is my network channel, then you would be able to -- >>: There were multiple attempts to do high-priority packets for audio and video, and so far it didn't work. Otherwise, yes, in the perfect case, this is the white space spectrum, your microphone is a white space device, everybody has the same protocol, and everything is nice and clean -- unless 500 people take out their cell phones and try to browse the Internet and it gets congested. For now there is no way to say, okay, my device, my packet is high priority, give me a certain priority. >>: What if it was not in realtime -- you were just transmitting data from, say, a live show to a studio where they were going to master that particular show, so it's less important that you listen to it in realtime. Would the system work with that? >>: If you record the file and then download the file, sure. What I'm hearing is anything you're doing over Wi-Fi now you would do over this. >>: Actually, I thought he said that Wi-Fi had a bigger bandwidth or whatever. This is [inaudible] same stuff? >>: It's comparable spectrum if you think of it -- the bandwidth -- you can do similar things in -- >>: So my question is, what if, you know, we don't have this illustrious MicProtector that doesn't exist yet and might in 20 years -- >>: Right. >>: But you [inaudible] and it doesn't have any kind of indication. I mean, you of course mess up the other mics that are already running, but is yours negotiated? >> George Nychis: Yeah. I will definitely talk about that. >>: Good. >> George Nychis: So I have, I think, two evaluation slides, and then I'll talk exactly about deployment. There are a couple of interesting cases of deployment.
So for evaluation, what we did here is we had the mobile microphone within 20 to 30 feet, and here on the X axis we have time, on the Y axis we have the white space device's spectrum -- how much of a single channel it's using around the microphone -- and it shows how it adapts. What we did through the evaluation is verify that through this entire process we never created audio disruption on the microphone system. And as you can see, over time it is using a significant amount of the channel without ever actually interfering with that mic, right? So the mobility of the mic here really did not hinder our ability to use the channel. We do adapt over time as the microphone signal changes, but in doing so we do not interfere with the mic. And then this was a more challenging scenario. In this case we had a low-power microphone, we moved around with the microphone, et cetera, and we allowed the SEISMIC-enabled white space device to use the spectrum. In this top graph what we show is the power of the microphone; this was recorded at the same time that we did the actual experimentation. And below it we plot the amount of frequency we use. Over time this microphone signal is fluctuating. And we have something like a lower bound on our threshold here, where the system kicks itself into a low-power mode: the MicProtector says, hey, the microphone power at the mic receiver is so low that any sort of ramp-up from a white space device could possibly cause interference at the microphone, and we do not want that. So in those cases, the white space device has to vacate the channel. This provides that extreme protection for the microphone, right? And so over time what we show is that the white space device adapts its frequency and uses the channel, and when the microphone power is low enough, we choose protection of the microphone and vacate the channel. >>: [inaudible]. >> George Nychis: It's opportunistic spectrum access. So we want to use it while it's available. So then we did an evaluation with many microphones. We haven't had a chance to test this in a live environment, but what we did do is talk to some coordinators from various events and receive microphone data -- essentially where microphones are placed in the frequency domain. And from this we drive a simulation in which we use the real microphone placement from the 2008 NBA All-Star Game, the 2010 BCS Championship Bowl, and the 2010 Worldwide Partner Conference. So there's a varying number of mics across these events. What we do is model signal strengths of mics to mic receivers, model the interference power of white space devices, and simulate a thousand SEISMIC-enabled white space devices at each event, and ask the question: if a white space device was at these events, how much spectrum could it use given all of these microphones? Okay. So we present the evaluation, and we compare it to essentially three different things. So again, we knew the mic placement of each of the mics across the events.
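As a rough illustration of the comparison being simulated here, the sketch below contrasts an FCC-style rule (vacate the whole 6 MHz TV channel when a mic is present) with SEISMIC-style notching (give up only a small band around each mic the device would actually disturb). The channel count, the 200 kHz notch width, and the `interferes` predicate are assumptions for illustration, not the paper's simulation code.

```python
# Rough illustration of the comparison being simulated: vacating whole 6 MHz TV
# channels whenever a mic is present versus suppressing only a small notch
# around each mic that this particular device would audibly disturb.

TV_CHANNEL_MHZ = 6.0
NOTCH_MHZ = 0.2          # assumed suppression needed per affected mic

def spectrum_vacate(channels, mics_per_channel):
    """FCC-style rule: a channel with any mic in it contributes nothing."""
    return sum(TV_CHANNEL_MHZ for ch in range(channels) if mics_per_channel[ch] == 0)

def spectrum_seismic(channels, mics_per_channel, interferes):
    """SEISMIC-style rule: keep every channel, minus a notch for each mic this
    device would audibly disturb (interferes(ch, mic) -> bool)."""
    total = 0.0
    for ch in range(channels):
        notched = sum(NOTCH_MHZ for mic in range(mics_per_channel[ch]) if interferes(ch, mic))
        total += max(TV_CHANNEL_MHZ - notched, 0.0)
    return total

# Toy example: 30 channels, a handful of mics, and a device far enough away
# that it only disturbs the mics in the first three channels.
mics = [3, 2, 1] + [1] * 7 + [0] * 20
print(spectrum_vacate(30, mics))                            # 120.0 MHz
print(spectrum_seismic(30, mics, lambda ch, mic: ch < 3))   # ~178.8 MHz
```

The gap between those two numbers is the kind of difference the red versus green bars described next quantify with the real event data.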
And what we have here in red is, if we follow the FCC ruling and vacate a channel because a microphone is there, how much spectrum we actually end up having. On the Y axis we have available spectrum in terms of megahertz. Based on that, this first bullet point shows that if we follow the FCC regulation, at the NBA event we would have one free channel, at the BCS Bowl 7 free channels, and at the Worldwide Partner Conference 17 free channels. Now, we compare that to a SEISMIC-enabled white space device operating along with the microphones. If you look at these green bars here, this is how much spectrum availability the average SEISMIC-enabled white space device would be able to achieve. This is achieved by suppressing frequency around the microphones and leveraging the fact that the interference from the white space device isn't always going to require suppression, because it's below the squelch tones -- you're not going to interfere with every single microphone, right? So we're taking spatial awareness into consideration. And then we also present the optimal suppression: let's say subcarrier suppression was perfect and we could suppress exactly the mic's frequency band. We're actually pretty close to that value. And then finally we said, okay, we have a thousand white space devices: what amount of available spectrum do these devices have? For example, at the NBA event, 50 percent of these clients had at least 130 megahertz, and only five percent had less than 50 megahertz -- those are clients that would be either close to mic receivers or have a lot of microphones near them, et cetera. Likewise, at the BCS Bowl 80 percent of clients had more than 130 megahertz, and at WPC 90 percent had more than 145 megahertz. Sure? >>: [inaudible]. >> George Nychis: So, data rate. That's all going to depend on the modulation you use. If you compare it to something like 802.11g, or even 802.11n, let's say you get on the order of 128 megabits per second over a 20 megahertz channel or something like that. So you could divide this spectrum into 20 megahertz channels and then multiply that per-channel data rate by the number of channels. Right? >>: [inaudible]. >> George Nychis: Okay. >>: So [inaudible] two by two. You have a [inaudible]. >>: [inaudible] megabits per second. >>: Yeah. And how many white space devices does that translate into? How many can use that at the same time? >> George Nychis: So again, this all operates by I transmit and then you transmit -- essentially we carrier sense. So what you end up doing is a fair share, because you have to alternate between transmitting. Each device essentially gets a fraction, one over the number of devices in that spectrum. >>: What fraction? >> George Nychis: So if you had, let's say, 100 devices in the spectrum, that fraction of air time is essentially one divided by a hundred, with a little additional loss due to what's called backoff -- we need to listen and then transmit, et cetera. So it's relative to the number of actual devices. >>: So with this amount of available space, how many devices can fit into each one? >>: How many -- it's just how fast can they work? >> George Nychis: Yeah, exactly. >>: You can pack -- >>: How slow are you willing to go?
So at full speed, how many will work? >>: [inaudible]. >>: So Wi-Fi uses 20 megahertz mostly. >>: Yeah. >>: So if you had -- you could fit in four Wi-Fi networks -- >>: So [inaudible] one megabit per second, that's [inaudible]. >>: If everybody's trying to stream. >>: Is everybody [inaudible] transmission. >>: Which is quite good, but it's still [inaudible] the fact that most of them will be in the pockets of their owners. So if a thousand can do one megabit per second, then 20 -- 20 megabits per second is the maximum possible, is that what you said? >>: [inaudible] megabits per second [inaudible] depends on -- >>: There's going to be a direct mathematical relationship there between [inaudible]. [brief talking over]. >>: They share the bandwidth. You have 150 megahertz [inaudible] kilohertz all [inaudible] you are on 0.7 -- 0.6 megabits per second. >>: So at the NBA game, which is in a 15,000-seat arena, you can't have 15,000 [inaudible]. >>: But not everyone will be using it. [brief talking over] [laughter]. >> George Nychis: That's how it works now. >>: [inaudible]. >>: But those 15,000 take out their devices [inaudible] with those microphones. [laughter]. >>: But if the usability of those new devices is so poor that nobody will want them, that takes care of the problem. [laughter]. >>: Yeah. >>: That's why we had the first slide telling you -- [laughter]. >> George Nychis: So this was a question that came up: deploying SEISMIC. This is actually a great question. How do you actually deploy SEISMIC, right? Let's take the scenario that was given: some microphone has the MicProtector and some microphone does not. If the white space device assumes everybody has a MicProtector, it's going to start to ramp up; one mic is going to strobe at it, the other one's not, and it's essentially going to trample that other microphone, right? So how do we actually partially deploy SEISMIC? The first thing we can do to deploy this optimal SEISMIC protocol -- and I'll talk about another way of doing deployment after -- is that the mic has to report to the database whether it has a MicProtector paired with it. By doing this, what a white space device will do when it enters a channel is check the database. If every microphone in that channel has a MicProtector, it can perform the SEISMIC protocol. If any microphone does not, it cannot, and it has to vacate that channel to protect that one microphone. So this would be a way of partially deploying SEISMIC. And by doing that, you would provide protection for the non-SEISMIC-enabled mic systems and still allow convergence to using the spectrum around those that are. >>: [inaudible] microphone system? It's not going to get in the database in time -- it's not going to get in the database if I put it in tonight. >> George Nychis: No, it will. >>: It's not going to get in the database. >> George Nychis: It definitely will. If the entry is in the database, then the white space device is going to check that database before transmitting. >>: So if I turn on the microphone tonight, it goes into the database tonight? >> George Nychis: Right. >>: And the device can see it tonight? >> George Nychis: Right. Right. >>: And the database you're proposing is not necessarily the FCC's mandated database? >> George Nychis: Correct. >>: Oh, so you've got to differentiate there between those two databases. Okay. So there's a local database that it's consulting and there's a -- >>: No, so as I said before.
So with SEISMIC you would expect the FCC database to be [inaudible]. >> George Nychis: Essentially support this [inaudible]. >>: The FCC right now has the mic data, but that changes. For example, right now it requires the white space device to consult the database once every 24 hours, or if it moves more than [inaudible]. >>: Right. But the database -- you have to tell that database 30 days in advance of when you're going to use it that you're going to use it. >>: That's the FCC. But you're saying -- [brief talking over]. >>: So if this is accepted, there will be two -- >> George Nychis: No. It will be one, but it will be augmented with -- >>: So this would be a way in which existing mics can continue to use other channels beyond the two channels: what they do is populate their entry, and then white space devices will talk the SEISMIC protocol beyond that. >> George Nychis: So there's one more sort of deployment scenario, which I'll just summarize, that is easier to deploy. It's less optimal but also less complex. Essentially, what this deployment could do is not use some of the more complex sides of SEISMIC, like the power ramping. In the full protocol, when we enter a channel we power ramp; here we could remove that, along with some of the interference detection at the mic system. If you remember, there were two components that we needed to know to do suppression: one was the mic signal at the mic receiver, and one was our interference level at the mic receiver. There's actually another way to build sort of a different flavor of SEISMIC, and the way to do it is the following. Remember there are the two key things needed. To learn the mic signal power, what we can do is use the database: update the microphone's entry to include the mic signal power, where there's essentially a range of signal powers over time -- some average signal power, some lower bound, et cetera -- so that we know approximately how this mic is operating. But we also need to know the amount of white space device interference at the mic receiver. So something else you could do is enable the mic receiver as a beaconer. If the mic receiver is periodically beaconing, transmitting, et cetera, you can leverage that to estimate your interference by using channel reciprocity: I can say the approximate signal strength of the mic receiver to me is X, therefore I can estimate approximately what my interference power at the microphone receiver would be. And so by doing this, you could use these two values to apply a more conservative suppression to avoid interference at the microphone, but possibly remove some of the complexity of SEISMIC. This could be another possible deployment scenario. So in the optimal SEISMIC protocol deployment we achieve the highest available spectrum efficiency -- this is really the scientific contribution of our work -- but we require a little more complexity at the white space device and mic, and a number of modifications are needed for deployment. The beaconer and database approach could enable a less complex SEISMIC protocol. It's easier to deploy; fewer changes are needed.
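A small sketch of the reciprocity idea in this lighter-weight deployment: the white space device measures the mic receiver's beacon, assumes the path is roughly symmetric, and derives a safe transmit level from the database entry for the mic's signal range. It is shown here as a transmit-power cap; the talk frames it as choosing a conservative suppression bandwidth, but the same two inputs drive either choice. Every constant and name below is an illustrative assumption, not the actual design.

```python
# Sketch of the beaconer-plus-database deployment described above. The white
# space device estimates the path loss to the mic receiver from the received
# beacon (assuming rough channel reciprocity), then caps its transmit power so
# that its interference stays a conservative margin below the mic's weakest
# reported signal. All constants and names are illustrative assumptions.

BEACON_TX_POWER_DBM = 10.0      # beacon power the mic receiver is assumed to advertise
CONSERVATIVE_MARGIN_DB = 20.0   # extra headroom, since we cannot probe-and-strobe here

def estimated_path_loss_db(beacon_rssi_dbm: float) -> float:
    """Reciprocity: loss from mic receiver to us ~= loss from us to mic receiver."""
    return BEACON_TX_POWER_DBM - beacon_rssi_dbm

def max_tx_power_dbm(beacon_rssi_dbm: float, mic_lower_bound_dbm: float) -> float:
    """Highest transmit power whose estimated interference at the mic receiver
    stays CONSERVATIVE_MARGIN_DB below the weakest mic signal in the database."""
    path_loss = estimated_path_loss_db(beacon_rssi_dbm)
    return (mic_lower_bound_dbm - CONSERVATIVE_MARGIN_DB) + path_loss

# Example: a beacon heard at -70 dBm (80 dB path loss) and a database lower
# bound of -55 dBm for the mic signal allow transmitting at up to 5 dBm.
print(max_tx_power_dbm(beacon_rssi_dbm=-70.0, mic_lower_bound_dbm=-55.0))  # 5.0
```

The trade-off is exactly the one noted next: without the strobe feedback loop, the margin has to be conservative, so some reusable spectrum is left on the table.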
But you don't achieve the optimal suppression, because you have to take more conservative suppression values to avoid interference with the microphone. But it's a way to approximate it. So in summary, the most important thing we've shown in this talk is that microphones and white space devices can co-exist on a TV channel. There's really no need for extremely conservative rules that say we cannot co-exist on a channel, we need to reserve channels, et cetera. We proposed the SEISMIC system, which requires no changes to existing mic systems: your mic systems can operate as is, you can augment them with the MicProtector, and it could possibly be built into future mic systems. And by doing so, we ensure maximal reuse by white space devices of the spectrum surrounding the microphone. So that concludes my talk. I'd like to thank everybody for attending, thank Ranveer for speaking, Ivan for hosting, and everybody else for basically making this happen, and if you have any other questions, the floor is yours. >>: Is there enough difference in complexity between the full system and the light system that the manufacturers may just say, well, the light system's good enough and it's a whole lot cheaper, we're just going to go that way and forget the complex system? >> George Nychis: It could be, in the sense that you avoid doing this power ramping when entering the spectrum, right? And so maybe they say that doing that is a little more complex, maybe there's some error in doing it, something like that, where maybe the spectrum you gain is not as beneficial compared to just doing the basic protocol. >>: It's all part of [inaudible] -- the degree of extra complexity. Is the full system twice as complex or just a few percent [inaudible]? Is it going to cost double as much to implement the full system? >>: [inaudible] but it doesn't [inaudible] anymore. It's constantly just beaconing. >> George Nychis: Right. >>: So it's very trivial. It's trivial to implement, but it needs to be connected to the Internet. That is an assumption which we don't know how feasible it is -- at all these events, is it easy to connect the microphone [inaudible], and what's the added cost? >>: And backing up a way on this -- the frequencies that you're talking about this affecting, is it correct that that's not the two channels above and below channel 37 that are [inaudible] for microphone use? So right now you cannot use those two channels for data transmission. It's reserved for microphone use. And you're not suggesting using those channels for this. >>: Could this be more [inaudible] that is operate anywhere in the white space [inaudible] just configure around it. >>: Their goal is to change the policy [inaudible]. >>: Okay. So that's the [inaudible]. >>: Okay. >>: [inaudible] is that as long as you don't overpower the squelch tones and you don't interfere with the [inaudible] -- under which channel model is that -- >> George Nychis: Right. So this is essentially leveraging capture, where if you think about how much [inaudible] you are with FM modulation and demodulation, essentially what you're doing is frequency variation, right? So you have some tone and you're varying the frequency at which it operates. >>: I'm referring to the channel model of the wireless channel. I wasn't asking [inaudible]. >>: This is purely the receiver's view. Irrespective of the channel.
You are just looking at what is being received at the receiver and what the impact on it is -- so the way we measure it is we take an input from an antenna and put it through a splitter. One part goes to the receiver, the other part goes to the spectrum analyzer. So we are getting the exact view that the receiver is seeing, and from that we are inferring when we get to [inaudible]. >>: Right. >>: So it's fair to say that that's good enough. So in SEISMIC you set a power threshold. You detect the [inaudible], and if it exceeds that threshold then you start beaconing. >>: Correct. >>: How do you adjust the power of your beacons to accommodate the effects of the wireless channel? So, for example, what happens if the white space device -- what if that device just happens to be in a fade where it can't actually hear your beacon signals? >> George Nychis: Yeah. No, that's a valid question. So that's in terms of being able to detect the strobe signals, right? >>: That's right. >> George Nychis: And right. So -- >>: And you'll be able to hear its own destination. >>: Right. But you have to understand that the wireless channel is not a symmetric bi-directional channel -- that's the nature of the wireless channel. >>: But the assumption is that [inaudible] not really [inaudible] seconds. Most of the [inaudible] it's extremely long periods of [inaudible] a few seconds, yeah, that's something that [inaudible] many. >>: It could be a shadow. >>: It could be the shadowing case -- this is, you are saying, the white space device -- >>: That's right. >>: [inaudible]. >>: So how do we ensure that the beacon we're sending out is strong enough to overcome any shadows that -- >>: [inaudible] just relying on the multipath [inaudible] -- that is, anything we send, either direct or through multipath, gets through to the white space device. And the other thing is, the strobes are sent at much higher power than we expect from these white space devices. The FCC specifies [inaudible] for these devices; the strobes are [inaudible], so those are ways [inaudible]. We haven't seen a case where -- because of this [inaudible] you would be countering most of this. >>: You say that no change is needed to existing mics, but that's really only under the database model. I mean, strictly, without some sort of augmentation with the protector -- >> George Nychis: Right. So what we're saying is that you don't need to make hardware modifications to the mic system. And so -- >>: [inaudible]. MicProtector. >>: You do need to augment the system in some way. >> George Nychis: Correct. >>: Okay. On the database paradigm, would it be reasonable to build up a database of different microphone manufacturer models -- make it incumbent upon the wireless manufacturers to provide that data -- so that a user, when entering their microphone, isn't just saying I want this channel, but rather I have this microphone model on this frequency, and that way you could at least get down to -- >> George Nychis: Yeah. >>: Knowing that I only need to take out 200 kilohertz or whatever -- >> George Nychis: Right. Right. Essentially by providing that information we could have a better estimate of how much we need to suppress and be less conservative. Right. >>: Then with a database-only model, you would at least be able to recapture more of the spectrum. >> George Nychis: Right.
>>: The main thing is, our company, we deal with a lot of houses of worship, and the average house of worship doesn't even know there's a database yet. And it would be a stretch to get them to even look at their microphone and figure out what the frequency is. But maybe we could get them to say, okay, yes, I have this model XYZ and I see this 247 megahertz number on it -- we can get them to do that at least, to help with that. >>: So I'm not sure I understood the answer to the question you gave. I'm concerned that, well, like you're saying, houses of worship, karaoke, the bar next door to your building -- you know, they're going to have a concert tonight and they turn on their wireless microphone. And you said something about the microphone has to notify -- whatever it is -- and if it didn't do what it's supposed to be doing, it has to vacate the channel, so the bar's not going to be able to use it, they're going to have to find another channel. I'm confused about -- >> George Nychis: Yeah, sure. >>: This "must" thing sounds like it's going to, you know, actually have to change a lot of the infrastructure of what's being used commonly around the city in the way of wireless microphones. Eventually. Once you guys get control over all these channels. >> George Nychis: What was the "must" part again? >>: I don't know, [inaudible] actually on the slide it said that the microphone must do this, and then you said that if it doesn't do this, then it must vacate -- it can't use that channel. A new microphone coming -- >>: Not the microphone, the white space device. >> George Nychis: Yeah. >>: White space device? [brief talking over]. >>: I got it backwards. >>: Yeah. The other way. [brief talking over]. >>: [inaudible] it's going to be looking for new microphones all the time, even if they don't have a MicProtector; correct? >> George Nychis: So if they do not have a MicProtector, in this partial deployment situation, that mic needs to basically notify the database that it's a mic in the channel that does not have a MicProtector -- it does not follow SEISMIC. >>: That's what I'm not understanding. Some guy at the bar isn't going to be notifying anything of anything; he's just going to turn on his karaoke system. >> George Nychis: By doing this, right, what the FCC is mandating is that this mic has to update the database, right? So it's basically part of the rules, where right now, if a mic is operating, it needs to update the database with where it's operating in order to have the protection. >>: Okay. But you guys are going to assume that there are going to be a lot of people who aren't going to follow that, and they're just going to turn on their microphone and do whatever the heck they want. >>: No, no. >>: So your data system has got to negotiate or figure that out so you don't lose data or do something weird, right? >>: No. So, Ivan [inaudible] frequencies. Coming back to the frequencies, to the audio piece, right: you wouldn't be able to operate your microphone anywhere on any channel. It would be illegal if you started to operate it on any channel other than the two channels. >>: Right. >>: So in order to enable that, what we are suggesting is we keep the systems as they are -- don't require, don't [inaudible] mic people again -- just give them this extra box. You just plug it in next to your mic receiver and your system starts working as is. The devices -- it's on them to adapt and make sure that they are not using the [inaudible]. >>: Right.
But to protect yourselves you have to protect your data, so you should be constantly checking just to make sure, right? >>: So the way we protect our data -- yeah, this extra device that you put in is the one that's going to update the database. It's the one that sends a signal to the white space device: hey, move out of this space. >>: Even if somebody didn't update the database, they just turn on their mic. >>: We have a disagreement on that. The two channels above and below 37 are open to wireless mics, period. Reserved for wireless mics. And any other space below 700 is open to wireless mics as long as it doesn't -- up to 700 -- >>: [inaudible]. >>: Below 700. >>: 470 to 698. We can go any -- [brief talking over]. >>: [inaudible]. >>: [inaudible]. >>: [inaudible]. >>: [inaudible] at which point it's as good as unusable. >>: Interference from whatever source. >>: From white space devices. They could come [inaudible]. >>: Which, well, they don't exist yet. But when they do, yes. >>: It will become unusable. >>: Except that your white space devices are mandated to avoid existing uses in those -- >>: No, no. In those frequencies, right now, with the latest ruling, they are not required to sense at all. You're not sensing for wireless mics. White space devices can operate on that frequency right over the mic, without even listening for it. >>: It's up to us to -- >>: It's up to us to move into the reserved channels. And the whole purpose of the database at the moment is, if you know that on December 19th you're going to need, let's say, 36 channels, and whatever you have [inaudible] 20 channels on the reserved channels, then you have to reserve the extra space, and that's when these devices are supposed to query the database, figure out that you're there -- and that's supposed to work right now. >>: You alluded to another thing: by the FCC mandate we are required to use the two channels and fill them up, use everything we can, before you're even allowed to go looking in the other direction. >>: You have to certify that you have used up your reserved channels, and they expect you to get at least six mics on a reserved channel. That's going to mandate a certain level of technology, because, you know, there could be anywhere from two to 12 channels of wireless mics on one TV channel, depending on the quality of the equipment. >>: I was going to make an additional comment. One thing that concerns me is we're discussing unlicensed white space devices -- in essence, consumer devices. And invariably with consumer devices you're going to have companies that are not so considerate as Microsoft, trying to build as cheap as they can, with the cheapest chips they can, the cheapest sensors they can. And even though they are supposed to comply with all the regulations, they are still going to tromp on it. And I just see that happening somewhere down the road. It's not something that you can specifically address, I'm just -- it is a concern from the audio side that I see interference coming from just that reality. >>: The thing is [inaudible] depending on how much of it [inaudible] these devices, if they are FCC certified, would have to meet certain tests. >>: I understand that, and I've heard lots of other rules that -- >>: It depends what goes in the ruling and what is part of the standards. >>: Sure. >>: So depending on that. At least a certain [inaudible] -- for example, Wi-Fi devices do meet a certain [inaudible].
>>: Just in the real world, you know -- I'll pick on Chinese manufacturers -- somebody figures, I'll fudge the line a little bit here and dump these devices, and they've made their money before they even face any lawsuits or the FCC says you can't do this anymore, and the devices are already out there and there's nothing you can do about it at that point. I mean, again, I understand. You're taking every precaution you can. You are addressing our concerns, and I appreciate that. I'm saying others would not be so considerate. >>: How are these devices used during their transmitting? Can one device talk to another without buying time from T-Mobile or AT&T or whoever? >>: The white spaces. >>: The white spaces. >>: Anyone can use it. >>: Anyone can use it. So there's no overarching seller of that time and space, it's just [inaudible]. >>: So do you anticipate this being a corporate thing, or even, say, a regular consumer will say, I have a big family and a big area, and I want to just put out base stations and mount them wherever -- or do you see this more as a Verizon is going to implement a citywide -- >>: There are different standards. So there's 802.11, which is essentially Wi-Fi plus plus, where you [inaudible] Wi-Fi [inaudible] with more speed. 802.11af is Wi-Fi on this spectrum, which would be like you buy a home router and plug it in -- buy it from Best Buy and it just does white spaces. You can cover your entire house using your home router, which you couldn't do. That's one scenario. The other standard that is being developed is 802.22, which is provisionally [inaudible] -- that is, I want to cover an entire city, then what protocol should you take? That's more WiMax. 802.11af is the Wi-Fi one. So both of those are scenarios that [inaudible]. >>: In the same frequency bands? >>: Yes. The other one which is more likely to come up is M2M -- machine to machine -- like my smart meter at home talks to the utility and updates the data, those kinds. >>: Any other questions before we call it a night? Thanks to our presenters. [applause]