
>> Arjmand Samuel: It's my pleasure to introduce Zheng Dong, who has been a research intern this summer. He's been working on the Lab of Things and how programming for the Lab of Things can be made easy using TouchDevelop. Zheng.

>> Zheng Dong: Thank you, Arjmand. I'm Zheng Dong. I'm a third-year Ph.D. student from Indiana University, and at IU I work in the [indiscernible] technology in the homes of seniors group, where we develop and test assisted-living technologies that protect security and privacy for senior citizens.

I'm very excited to have worked with Arjmand and the entire Lab of Things team this summer on the integration of the Lab of Things with TouchDevelop.

Nowadays a large number of sensors are installed inside and outside our houses. For example, we have door sensors, window sensors, and sensors for temperature, humidity, motion, and distance. In addition, we might want to control electronic devices such as printers and TVs, even though no sensors are installed on those devices. Two typical questions people might ask are: how can we enable the devices and sensors to be effectively interconnected? And for academic research that needs a study deployed in a large number of houses, is there an effective way to achieve that?

The Lab of Things offers a powerful framework to deploy field studies across different houses, and it enables easy data collection from the houses into cloud storage. In addition, the framework gives the user the capability of remotely monitoring the system's running status and even remotely updating the applications installed on the hubs. A fundamental component of the Lab of Things is the HomeOS platform. This software platform runs on a local machine in each house, usually a small laptop computer known as the HomeOS hub, which connects and interacts with different types of sensors and devices.

The Lab of Things is currently focused on home environments. However, the framework can actually be deployed in many other environments, for example nursing homes, office buildings, and even cars; there's nothing stopping us from deploying this technology in those environments. Now let's take a closer look at the different layers of the local HomeOS we mentioned in the previous slide. In the existing model, the HomeOS platform consists of four layers. Let's start with the very bottom one. Device discovery is taken care of by programs named scouts. If a device needs to be paired with or communicated with by the platform, the HomeOS drivers take care of that as well.

Basic functionalities of the devices, for example taking pictures with the cameras or monitoring door-opening events with the door sensors, are written in the HomeOS drivers. The management and access-control functionalities are part of the platform itself.

And on the very top layer we have the applications, a few applications that allow the user to interact with the different components of the platform. My contribution this summer was to build a new layer on top of the application layer, called the lightweight application layer, which allows simple applications to be written with only a few lines of code and makes it easier for people to write programs on the Lab of Things. I should also mention that I worked on the device connectivity and device functionality layers, enabling the Kinect sensor to be connected to the Lab of Things platform. And I will soon have a demo that showcases the integration of three great technologies: the Lab of Things, TouchDevelop, and the Kinect sensor.

Okay. Some more details about the Kinect sensor driver on the Lab of Things. Given the Kinect sensor's large set of capabilities, I defined the following functionalities and operations for the driver.

Any application on the Lab of Things platform can ask for the latest image taken by the Kinect sensor, or it can subscribe to that event to receive all future color images taken by the sensor.

It can also get the depth information, and for depth information we enable both the depth matrix and the depth images to be returned by the Kinect sensor's driver on the Lab of Things platform. The Lab of Things driver also enables audio recording: you can indicate a duration, and it will record for you and store the file locally on the hub. And at last, it returns the detected skeleton information and stores it locally, so that any application can fetch the file for further analysis.

Okay. So the Lab of Things is a very powerful framework; it can support a large number of applications, and many innovative ones can be created based on the local platform's devices and sensors and also the cloud components. The traditional Lab of Things application follows a browser-server model. So in order to write an application on the Lab of Things, you really have to follow the WCF service approach. You first start Visual Studio and create a project, and you define a number of endpoints and functionalities; here I have IService.cs, where you expose a number of endpoints. Another very important file for creating a Lab of Things app is the Service.cs file, where you actually implement the functionalities and the endpoints you have declared. The developer might also want to write some code to respond to standard Lab of Things events, for example what actions to take when a new device is registered with the platform. This is something the developer needs to write. We can imagine an entry-level developer taking a week or so to get familiar with the entire framework and write some code in C# and also JavaScript, maybe index.html and all of these things, and they might have to write maybe 100 lines of code to implement the basic functionality.

However, we have to realize that .NET applications on the Lab of Things are hard to write for a reason: they're very powerful. We can create a project to do image processing, motion detection, a lot of cool functionalities. But we also have to realize that part of the user population comes from fields such as nursing and health informatics, and these users are not necessarily hardcore developers.

They might not know about C# or Visual Studio at all, but we would like to allow them to create applications that involve only very simple logic. For example, imagine a user has the Lab of Things system deployed in his house and wants to write a program just to visualize the data collected from a sensor. This is very simple logic, and we shouldn't need the user to write all the .cs files, HTML files, and JavaScript files to implement it. We should make writing the code simple.

Let's review the requirements again and see if we can somehow reduce them while still achieving the performance and functionality we get from the Lab of Things platform. The first question would be: can we eliminate the requirement for C# and Visual Studio? It's actually okay, as long as we can make writing the program as easy as doing a few function calls and combining them. In this case these end users can simply create client-side scripts, for example in JavaScript, and just make the function calls and get the results. If we can achieve this goal, then we don't need to require a Visual Studio installation or the use of C#. I should mention that this scenario is specific to particular circumstances: we only think this would work if you are interested in very simple logic, not something like image processing, audio processing, or motion detection.

We would like to offer a great programming experience to Lab of Things users, but we also want to avoid changes to the existing framework and the local platform. So one solution is to build a very special application on top of the application layer, called the generic application. We call it the generic app because it's not designed for just one or two specific devices; instead, it can potentially talk to every sensor and every device connected to the Lab of Things platform. The generic app runs on the Lab of Things platform as a WCF Web service, and the problem becomes how an end user would interact with the generic application. We can imagine two ways the communication could take place.

The first one is through the generic app's home page. Like other applications on the Lab of Things platform, our generic app has a user-friendly Web interface. If you go to the Web page, it shows that the developer has to fill out A and B in this logic: if A, which is an event, happens, then we execute an operation, which is B.

In this screenshot, the developer chooses that if the door sensor is triggered, the Kinect sensor should record audio for an arbitrary length; that's what this page tells you. And as you can see from the screenshot, the logic embedded in this Web page is somewhat fixed. To add flexibility and a better user experience to this generic app, we have to consider other ways of talking to the WCF service. One option is to talk to the generic Web service directly through RESTful function calls, and of course developers could choose to use third-party applications to help make the call.
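A direct RESTful call to the generic app might look like the following sketch. The endpoint path, hub address, and JSON field names are assumptions for illustration, not the actual generic app's API; only the pattern (an HTTP POST carrying a JSON body) comes from the talk.

```python
import json
import urllib.request

# Assumed hub address and service path; substitute your own.
HUB_URL = "http://192.168.0.10:51430/GenericApp"

def encode_call(endpoint, payload):
    """Prepare a RESTful POST to one of the generic app's endpoints:
    returns the full URL, the JSON-encoded body, and the headers."""
    url = HUB_URL + "/" + endpoint
    body = json.dumps(payload).encode("utf-8")
    return url, body, {"Content-Type": "application/json"}

def call_generic_app(endpoint, payload):
    """Send the call and decode the JSON response (needs a live hub)."""
    url, body, headers = encode_call(endpoint, payload)
    req = urllib.request.Request(url, data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Any HTTP client, including a third-party tool, could make the same call; the point is that nothing beyond standard HTTP and JSON is required.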

Both of these methods are okay. My point here is that the generic app really opens up the world of easily creating Lab of Things applications. But to add flexibility and a better experience, we needed to consider something else, something with a better user experience and good functionality, and TouchDevelop is the tool we considered for interacting with our Lab of Things generic app. TouchDevelop is a technology designed and implemented by Microsoft Research. It's an app-creation environment for mobile, cloud-connected, touch devices. Anyone can visit touchdevelop.com and work on their own scripts. With the help of

the TouchDevelop APIs, a developer who wants to write a good program can reduce the number of lines of code they need to write, and we chose TouchDevelop to connect to our generic app for the following reasons. First of all, TouchDevelop has a very user-friendly interface: you just go to touchdevelop.com and make a few function calls to our generic app, and touchdevelop.com also offers you an environment for code debugging. It can show you the variables and also the stack status, which is very cool. In addition, the rich set of APIs is the second reason we chose TouchDevelop. TouchDevelop has standard APIs for displaying images and playing audio, so developers do not need to worry about how to implement that.

The third one is mobility. Imagine that people have the Lab of Things deployed in their houses but are away from their home hub. With TouchDevelop they would still be able to write simple Lab of Things programs and have them deployed on the hub.

The next one is support for standard HTTP and JSON.

TouchDevelop has a library to make POST calls to our generic app, and it can also create and parse JSON objects, which allows the user to talk directly to the generic APIs. And the last one: support for exporting programs. This functionality is pretty cool, because I can create a Lab of Things application and export it as a library, so anybody else who is interested in using the Lab of Things can just use my library to make simple programs.

Okay. For each function call that the generic Web service provides, there should be at least one function on the TouchDevelop side to wrap the input parameters from the end user. For example, sending e-mails is one functionality provided by the generic Web service, and of course there is something on the TouchDevelop side doing the sending: it accepts the user inputs and makes the right function call. On top of that, we might want some other processes to be standardized. For example, instead of just sending a text e-mail, we might want an image taken by the camera to be attached to the outgoing e-mail. So on the TouchDevelop side we have made "send e-mail with picture" available as well, including both "send e-mail" and "send e-mail with picture" as two functions inside the library, and we let developers choose which one they want.
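The wrapping pattern just described can be sketched as two small functions, one layered on the other. This is an illustrative Python sketch; the field names are assumptions, and the real wrappers live in the TouchDevelop library, not in Python.

```python
# Hypothetical shapes for the two library wrappers: the plain call,
# and the standardized variant that adds a camera image attachment.

def send_email(subject, message, to_addr):
    """Wrap the user's inputs for the generic service's plain
    'send e-mail' call."""
    return {"op": "SendEmail", "subject": subject,
            "body": message, "to": to_addr}

def send_email_with_picture(subject, message, to_addr, picture_stream):
    """Same call, with a base64-encoded camera image attached."""
    request = send_email(subject, message, to_addr)
    request["op"] = "SendEmailWithPicture"
    request["attachment"] = picture_stream
    return request
```

The with-picture variant reuses the plain wrapper, which keeps the two functions consistent as the library grows.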

Now let's consider an elderly-care scenario where the Lab of Things technology can apply. One of the challenges for senior citizens living alone is the severe risk of wandering that comes with the onset of dementia and other cognitive disabilities.

Elderly people suffering from these problems may leave their house without proper clothing or at an unusual time and just go wandering. The second risk is social exploitation: the elderly are known to be an at-risk population, [indiscernible]. We have an existing .NET application on the Lab of Things platform that addresses this problem: the alerts app. The alerts app makes use of the door sensor and a Web camera, and its logic is that whenever the door opens, it takes a picture, sends the picture as well as an alert text through e-mail to the caregiver, and then uploads the picture and the alert text to Windows Azure, to the cloud.
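The alerts app's logic is simple enough to state in a few lines. Here is a hedged Python recreation of that flow; the callables stand in for the real drivers and cloud client, which are not the actual Lab of Things API.

```python
# Minimal recreation of the alerts app's logic: door opens ->
# take picture -> e-mail the caregiver -> upload to the cloud.
# The injected callables are stand-ins for the real drivers.

def handle_door_open(take_picture, send_email, upload_to_cloud, caregiver):
    picture = take_picture()
    alert = "Alert: the door was opened."
    send_email(caregiver, alert, picture)
    upload_to_cloud(picture, alert)
    return alert

# Exercise the logic with stubs that just record what happened.
log = []
handle_door_open(
    take_picture=lambda: "jpeg-bytes",
    send_email=lambda to, text, pic: log.append(("email", to, text, pic)),
    upload_to_cloud=lambda pic, text: log.append(("azure", pic, text)),
    caregiver="caregiver@example.com",
)
```

Injecting the drivers as parameters is just a convenience for the sketch; on the platform, the app would call the camera driver and the e-mail and Azure endpoints directly.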

Now the question is: can we do the same thing with TouchDevelop, using the Lab of Things library? Yes, we can, and I'm going to show you how. Can we switch to the demo machine, please? So the first thing we do is go to touchdevelop.com. After signing in, I can find the Lab of Things.

Thank you.

I can find the Lab of Things library, and I want to make sure that the Lab of Things library is referenced here. Here my application is called Lab of Things demo. Before I write anything, I make sure that my URL, the URL to my Lab of Things hub, is included, and I have also put in my e-mail address here so that I don't have to type it when receiving alerts from the alerts app. Then I have a few names defined here: the Kinect's name, the camera's name, the Azure name, and things like that, key and directory. The first task is to initialize the connection; I use the hub's URL and call the initialization function from the LoT library. Then I want to make sure the wall displays the objects in the right order. Let's first try something quickly and see if the Lab of Things platform is running correctly: I want to show all the devices that are currently connected to this platform. It's just one line of code: Lab of Things, then list all devices on the hub. And there you go, I should be able to run this. There we go. So with only one line of code I'm listing all the devices and sensors currently connected to this machine serving as a HomeOS hub. I'll go through them one by one. The first one is the camera of the laptop, and My Kinect is the name of this Kinect; then we have Z-Wave sensors, and most importantly the door sensor connected through the Z-Wave dongle to this platform. So it's working. Let's go ahead and create the alerts app.

Remember, the first thing we need to do is subscribe to the events: we want the door-opening events delivered to our script, right? So the first thing I need to do is make sure that Lab of Things, subscribe to event, is working. Sorry, update the stream. This way. Watch events. Okay. So here, with watch events, I'm subscribing to these particular events, and this function needs the device name, which I already have entered here; that's the door sensor's name. So it should be X. Okay, very good, it should be X. The next one is the device type; this is a sensor, so I will just put sensor here. And the third one is the event name, and in this case it's just get; I'm getting the status. Okay, this function call is complete. To see any response from this sensor, we also need to create something like this: Lab of Things, and we need to get the new events, right?

So in this case we want to do get new events. Let's give it a name. Sorry, this is the device name; the device name is X, I'll put it here. Okay, so this particular function returns two output parameters. One is has new, which indicates whether a new event was detected. The second one is events, which is the actual event that was received. Okay, how about we ask the program to sleep for five seconds, and then make sure that if a new event is received, we print out the event. Let's see, has new is true, and then we take the event, which is a structure in this case, and make sure the event's device name is printed to the wall. And to make sure it doesn't interfere with anything else, let's ask the platform to sleep for ten seconds in this case. Sleep, ten seconds. I think that's it.
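The subscribe-then-poll pattern in this part of the demo can be sketched as a small loop. This is a hedged Python illustration; `get_new_events` stands in for the library call, which (as in the demo) returns a has-new flag and the event structure.

```python
import time

# Illustrative polling loop: after subscribing with watch events,
# repeatedly ask for new events and collect each event's device name
# (the demo prints it to the wall). get_new_events is a stand-in.

def poll_events(get_new_events, rounds, interval_seconds=0):
    seen = []
    for _ in range(rounds):
        has_new, event = get_new_events()
        if has_new:
            seen.append(event["device_name"])
        time.sleep(interval_seconds)  # the demo sleeps 5-10 s per round
    return seen
```

With a real hub, the interval would be several seconds, as in the demo; the zero default just makes the sketch easy to exercise.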

Let's see. If I trigger this thing, it should get an event. There we go, it prints out the sensor that was triggered. And since the event is actually a structure, I can also print other members, for example the time it was triggered and the location of the sensor, in this loop.

Okay. So the next step is to make sure we can trigger the camera to take a picture, and maybe post that picture to the wall. For the next step, I can do it this way: let's see, get image. Okay, it's just one line of code, and this time I need the device name of the camera. Since I have already put it here, I can just say camera name. It actually returns two parameters: one is picture and one is picture stream. Picture is a picture object within TouchDevelop, and picture stream is a base64-encoded picture stream, which can be used with the WCF Web service; it can be posted back to our Lab of Things platform. Okay. So we want to make sure the picture can be posted to the wall. Post to wall.
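The "picture stream" mentioned here is just a base64 text encoding of the image bytes, which is what makes it safe to carry inside a JSON body to the WCF service. A quick sketch of that round trip, using standard base64 (the function names are illustrative):

```python
import base64

# The get-image call returns a picture object plus a base64-encoded
# "picture stream". This shows the encoding round trip that makes
# binary image data safe to embed in a JSON/HTTP call.

def to_picture_stream(raw_bytes):
    return base64.b64encode(raw_bytes).decode("ascii")

def from_picture_stream(stream):
    return base64.b64decode(stream.encode("ascii"))

stream = to_picture_stream(b"\xff\xd8 jpeg data")
```

Decoding on the hub side recovers the original bytes exactly, so the same stream can be attached to an e-mail or uploaded to cloud storage.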

In this case you should be able to see an image. Let me trigger that. There we go, this is myself. Okay, so that's just one line of code. All right, how about sending the picture to the caregiver through e-mail? It's also one line of code. How about doing that: e-mail, send e-mail with picture. This is the example I gave a few minutes ago. For this function to work, you have to put the subject here; let's make a fixed subject called TouchDevelop test. The next parameter is the message itself, saying this is a test, and put a period here. The third one is the e-mail address; I already have my e-mail address here, so I will just use the variable. And the fourth one is the image I would like to attach. In this case it's going to be the picture stream. So I think that's it. I will just run this, and once it's triggered, I believe it will take a picture and also send an e-mail to my e-mail account. But it may take a while, so I will show you when it arrives.

All right. So this is the alerts app: when the door opens, it takes a picture and then sends the picture by e-mail and also, potentially, to the Windows Azure cloud. As I mentioned a few slides ago, I also worked on some connectivity work that enables the Kinect sensor to connect to the Lab of Things platform. So can we replace the Web camera with the Kinect sensor and provide richer information? That's actually my next demo, but I will just make changes to the existing one to save some time and show you how. Oh, there you go, the alert we received.

First let me show you that. This is the alert I just sent from our test script through TouchDevelop. All right. So let's do something to the existing alerts app and make use of this guy, the Kinect sensor, to get the color information, the depth information, and also record audio. How do we do that? It's actually pretty simple, since we already have the logic built here: we just replace the get image here with get color image Kinect, and I have also put the Kinect's name here, so I will just use the variable Kinect name, and it returns the picture and picture stream. In addition to the color image, I'd also be interested in seeing some depth information, and in this case I would say Lab of Things and then get depth Kinect. All right. I still need to put my Kinect name here, and it also returns two streams; in this case let's call them picture depth and, okay, picture stream two. And I want to make sure that both are posted to the wall. Let's see, this is posted to wall. Right.

Let's see what's triggered. There we go: it takes the color picture and also returns the depth information, and this is the depth image taken by the Kinect. You can also get the depth matrix, if you choose, by using the right API in the Lab of Things library. And how about we add a third functionality to the app: recording the audio for maybe 10 seconds. Let's do that within one line of code. Lab of Things, and then, how about audio. Okay, we have record audio Kinect. As the first parameter I need to put in the Kinect name, and the second indicates the duration, so in this case 10 seconds. We can run it again. Sorry, we can just do that. Oh, another alert. So I need to trigger that. Okay. It will take the pictures and, oh, sorry, I need to find this file. It will do the audio recording for 10 seconds and then save the file locally on the hub, so I will need to find this file right here.

Okay. So this is the audio recording part, and it's working. The next one is to send the e-mails, and you've already seen that sending e-mails works. How about we also get some skeleton information extracted from the images that are taken? In this case we can do Lab of Things and then skeleton information. Skeleton, sorry, I should delete that. Skeleton, there we go: get skeleton Kinect. This time I only need to put the Kinect name here, and I want to store the result in a local variable, which is stream skeleton, and then post it to the wall. Post to wall. Let's run it again. When it's triggered, again it takes the color image and the depth image, it starts the audio recording, and there we go, the skeleton information. It's actually recorded in a local file, and any application on the Lab of Things that is interested in this kind of information can go ahead and grab the file and parse it and do some interesting stuff with it. So, any questions about the two demos?

All right. Can we switch back to the presentation?

All right. As our next steps, we'd like to test the usability of the Lab of Things TouchDevelop library. After we deploy the Lab of Things platform in people's houses, it will be interesting to see how end users actually interact with the touch-enabled programming environment: what types of Lab of Things applications they want to write, and how long it takes them to create that kind of application.

Privacy is another issue. Besides researchers who deploy experiments in people's houses, we also expect Lab of Things users who know about TouchDevelop to want to use it to write simple programs, and we need to know how to isolate the resources allocated to each TouchDevelop script. How do we prevent users from using each other's devices, apps, and local storage? These are research questions we will need to answer in the next step. And at last, speech-based programming. We know that at UC Davis, Professor Jeng Du Su [phonetic] has implemented a synthesizer that captures keywords from the end user's speech and automatically matches the closest commands in TouchDevelop. It would be an exciting avenue of research to extend this kind of work to the Lab of Things library, so that a Lab of Things user could briefly describe the task to be completed and the synthesizer would capture the keywords and match the corresponding Lab of Things events for them.

And at last I would like to thank the entire Lab of Things group and also the TouchDevelop group for your help and guidance. I have to say it's been really fun to be here this summer as an intern. Thanks very much for coming, and I would like to take any questions you may have. [applause]

>> Arjmand Samuel: Questions?

>>: So you're listening on your laptop, but ideally I would have my smartphone or tablet to do some scripting. Then, to set up a house, just one house, what do I need? Some computer somewhere?

>> Zheng Dong: You will need the Lab of Things running in your home environment; this is the hub running inside your house. At the same time, you can work on your scripts from anywhere; you can just access it, because for each hub we have enabled remote access, I believe.

>>: What's the cheapest hub I could put into my house?

>> Zheng Dong: That's -- I might not be the right person --

>>: The laptop, which is a very low-end [indiscernible] PC, a Windows 8 machine running the Lab of Things home software on it. Windows 7 also works. And Kinect devices. The other devices he's connecting are Z-Wave.

>>: It's our plan to get that down. It wasn't exactly the project of the internship, but if you totally separate the user interface from how this thing runs, I don't think you want to maintain and manage another Windows 7 PC.

>>: But you need a PC running, presumably all the time, in the home for these scenarios, if you have a PC.

>>: PC is the server. So control --

>>: The PC is the place where all the interconnectivity with the devices deployed in your home happens. Ideally it should be on all the time. So any PC would do.

>>: Updates and all that.

>>: Well, yes, but the Lab of Things gives you an interface for updates from the cloud.

>>: You're asking whether or not it can run endlessly. Yes, we have remote update capability, so if you don't ever want to touch --

>>: Costs $100. No user interaction whatsoever. It just works.

>>: Maybe [indiscernible].

>>: We have had issues. I think Microsoft's hypervisor does not do USB redirection; you have USB devices, [indiscernible]. But I think the precise answer to your question is, as I'm saying, any PC that runs Windows 7 or 8 will do. The cheapest you can probably find is 150 to 200 dollars. The reason it needs to be always on is that it's getting events from the devices. If it's off and your door opens, you won't know that it opened, so the utility goes down if it's not always on. But with respect to Windows Update and such, if you don't want security updates you can switch them off.

>>: Yeah.

>>: Actually, I'm Sal Lark from the Embedded Team, Windows Embedded. One of the products we have, Windows Embedded 7, is an OS that lets you, for these types of scenarios, pick and choose the components of your image. If you don't want Notepad, you can exclude it from your OS. That is one of the products we have right now, and we are thinking about future products having similar capabilities for these types of scenarios, home OS and these things.

>>: What's that product called?

>>: Windows embedded standard.

>>: Can you pick which version of .NET you can run on that?

>>: Yeah.

>>: We need 4.5; Windows 7 or 8 doesn't really matter?

>>: What's the minimum SKU you run that on?

>>: Smallest footprint.

>>: The smallest footprint -- I'm on the PM side, I can't remember exactly, but it's -- [laughter] -- it's in megabytes. Hundreds of megabytes, not gigabytes.

>>: That's cool.

>>: I have another question. So your script had a start button; you were basically starting a script and then checking if there was an event.

>>: Right.

>>: Now for the scenario you described this wouldn't work. You would want this thing to run when the event happens. So how would you make that happen?

>>: We could just leave the thing running. I mean, currently we can make a function call, because the rules app, which is the generic app, is running on the server, the hub. We can leave something on the hub to run for a while and let the hub take care of the functionality. We can just have the script start the functionality and let the hub take care of the rest, so that you can close the script.

>>: Do you have a plan to be able to push a script into a server?

>>: Yes, that's also something we need to consider, not just because of the functionality but because of some security issues we were considering, because we want to turn on the authentication part that is offered by the Lab of Things platform. There are several issues with the cookies and also the tokens and things like that. So we're considering that, and we're having a discussion.

>>: How easy is it to add new devices and sensors so that you can just call them by name in the interface?

>>: Add a new type of sensor or add a new --

>>: Could you do something -- could you connect something like a Nest?

>>: A nest.

>>: Sure, can we switch back to the demo machine, please?

>>: Voice commands work perfect. [laughter].

>> Zheng Dong: This is my laptop, and it's the Lab of Things dashboard, where you can actually add new devices through this portal. Anything that has not been registered will show up here, and you can click on one of them and give it a friendly name, for example Web Cam 2, choose the location, and also choose the applications that have access to this device. In this case I can choose that, and it will show up in all the applications that have access to it. And in the rules app, which I mentioned, you'll see a number of devices that are connected to the platform. It doesn't have a very pretty UI, but you get the idea of the functionality.

>>: So is that picking up devices that are being detected over the computer's Wi-Fi and/or Bluetooth?

>> Zheng Dong: Yeah, as long as it's connected and recognized by the home OS platform, I think you will see --

>>: For the devices we have drivers for, like an IP-based camera, that setup flow would just work, and we have a driver for a bunch of Z-Wave devices, so any Z-Wave device works. For the Kinect you have to actually write the driver and get that working. But it's all just .NET code.

>>: So you're asking about the code, what happens when it captures those things, because then I guess to use this in TouchDevelop you'd expand --

>> Zheng Dong: Right. Eventually we'll have one function corresponding to each function call in the generic app. You don't have to write additional APIs; we'll make all the APIs we discussed available as part of the library, so people can just go ahead and use the menu. It's actually not related to the device type: if you're adding a new type of device, you're still using the Lab of Things model. So you can just --

>>: The way you reflect those, when you sent -- you had one API

[indiscernible].

>> Zheng Dong: Right.

>>: And the string it's just the command names, right?

>> Zheng Dong: Yeah.

>>: Now I guess it would become better if there was a separate library for, I don't know, air conditioning, which would have methods like set temperature?

>> Zheng Dong: Right, set temperature --

>>: It would be a command and you could [indiscernible] by putting the set temperature that you need to know about it. But that's very discoverable.

>> Zheng Dong: Set temperature -- well, there are actually two ways of approaching that. One way: if you do have set temperature as part of the operations on the AC, which is a device type, right, if you have that defined in the Lab of Things platform, then the rules app will be able to return that operation for you. You don't have to guess, because I do have basic functionality to tell you that there is an operation called set temperature. And then --

>>: I was going to say, the reason this works is that device functionality itself is abstracted in HomeOS. It gets expressed in terms of roles, which I think of as service classes, of operations, which are essentially operations like set temperature. What Zheng does is basically take those definitions and convert them into something that can be called. If you wrote a HomeOS [indiscernible] driver or something, then those operations would legitimately become visible to his stuff without actually doing anything special, beyond writing just a HomeOS role or driver.

>>: It will be shown here as part of the options. For example, the Kinect sensor: I do have the following functionalities available to that particular role. And it's a new one.

>>: It's defined just like a class, and you can go to it, and the class defines a bunch of [indiscernible] there.

>> Arjmand Samuel: Any other questions? Well, let's thank him. Thank you.

[applause]
