

>>: Okay. So without further ado, the last talk is about giving away stuff, right? It's the free thing right at the end, the reason you've hung around and waited. This is Daron Green, who is going to be giving our last talk. Daron is heading up our Azure for Research program and he's going to talk to you about how you can use the cloud for your research and beyond.

>> Daron Green: Actually I'm going to do a range of things. Thank you, Chris. I'm going to cover a range of different subjects. I've got about 156 slides. No I haven't. [laughter]. You've sat through a lot, so this will be a little bit lighter than some of the other presentations. It's not that there's anything wrong with the other presentations. What I wanted to do is explain to you a little bit about the umbrella activity that I agreed with Dan Fay who has chosen, oh. He's there. He is there. Dan actually leads the Azure for research initiative. I helped him with that.

You are part of it, whether you realized it or not. The fact that you are here means that you're part of the Azure for Research initiative. You will have seen some of the imagery that we used for this. This is a sticker that we've just created. The idea is we are little lab-coat-wearing scientists and we benefit from the personification of our cloud as this giant robot that does the number crunching for you. The reason why I want to start with this is to share with you a very personal experience that I've had with the cloud, because we started out on this journey about eight or nine months ago -- Dan's nodding so he approves of me saying eight or nine months, which is good. So eight or nine months ago we started out and actually put a plan together for helping researchers understand what it means to use our cloud computing infrastructure, because it was originally built, as you've heard, for a different purpose. A lot of the training material, a lot of the content, a lot of the examples that we were talking to people about were very business specific. We knew that that didn't work, didn't make it accessible to researchers. Here's a picture of my kids. Why am I showing you a picture of my kids? This is

Ben on the left; this is Jack on the right. They got to hear about the fact that I was working with cloud computing and they heard about cloud computing, but they didn't really know what it was. They asked me to explain and I said look. It's a little bit like the computers that you currently have access to at home, you play games on, but you're sort of logging in remotely and it's not just one computer. You can scale up to a vast array of computers. I've been explaining cloud computing to them for some time. I went off to a conference and at this conference there were -- I'm hoping none of you were at this particular conference. There were about 76 computing services heads, so these were people who run computing services in universities in

North America. This was back in 2013 and I was talking as part of a panel discussion about the way in which the cloud is used by researchers, and many of them were saying no, no, no. You know, we don't really need the cloud. Our researchers don't use the cloud. They have no intention of using the cloud. And in nearly every case the universities were using the cloud; the researchers were using the cloud, okay? And in some cases they were very heavy users of our cloud, and it came as a surprise to them. For many it was quite disruptive because suddenly they went from being the gatekeepers to computing to being sort of disintermediated. The researchers could get a grant and go straight onto Azure and just start spinning up machines and getting work done. That made them feel a little bit strange. You know, what's their role?

It's a little bit like libraries that used to be the gatekeepers to the sacred texts. Now they have a very different function about curating digital assets or discovering digital assets. When I came home, I came home from this particular conference and was saying these guys don't know

what's going to hit them. Computing is changing. It's a paradigm shift. My kids were saying well what do you mean. I said look. Some of them were anxious, okay? A great sense of anxiety, their loss of status, loss of control, loss of influence, they're outdated. They feel threatened. And what I was saying to my kids was what they should see is that actually there is some freedom that they can get from this. They don't need to be trying to deliver stuff, the same kind of stuff that they were doing before. Actually, it's an opportunity for them to innovate. They need to change the nature of the services that they're providing to the folks in their universities or in their computing communities, research communities. Little did I know how this would come back to haunt me. These guys then said, okay. That's fine. That's great.

Can we play with the cloud? I couldn't think of a good reason why not. I'd given them access to computing infrastructure before, so I said okay, fine. I then showed them, if you haven't already logged into Azure, hopefully you will do so at some point in the next few days or weeks.

I then showed them how to log into the management portal. I didn't give them the password, all right? But then I logged in and showed them: look, this is how you spin things up. This is how you attach storage to a virtual machine that you've created. This is how you set up an endpoint. This is when they said, okay dad, we've got it, which is sort of code for leave us alone. Get out of it. We know what we're doing. And I said, but no. Let me show -- no. Okay?
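For readers who would rather script those same steps than click through the portal, here is a minimal sketch using today's Azure SDK for Python rather than the classic management portal shown in the talk. The resource group, image, VM size, password and the pre-created network interface id are illustrative placeholders, not details from the demo.

```python
# A minimal sketch, not from the talk: roughly the same "spin up a VM and
# attach storage" steps done with the current Azure SDK for Python instead of
# the classic management portal. All names, sizes and the NIC id below are
# hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

vm_parameters = {
    "location": "westus",
    "hardware_profile": {"vm_size": "Standard_D2s_v3"},
    "storage_profile": {
        "image_reference": {
            "publisher": "Canonical",
            "offer": "0001-com-ubuntu-server-jammy",
            "sku": "22_04-lts",
            "version": "latest",
        },
        # the "attached storage" step: an empty 128 GB data disk
        "data_disks": [{"lun": 0, "create_option": "Empty", "disk_size_gb": 128}],
    },
    "os_profile": {
        "computer_name": "research-vm",
        "admin_username": "azureuser",
        "admin_password": "<a-strong-password>",
    },
    # assumes a network interface already exists; an "endpoint" (open port)
    # would be added as a network security group rule on that NIC
    "network_profile": {"network_interfaces": [{"id": "<nic-resource-id>"}]},
}

poller = compute.virtual_machines.begin_create_or_update(
    "my-resource-group", "research-vm", vm_parameters
)
vm = poller.result()  # waits until provisioning finishes
print(vm.name, vm.provisioning_state)
```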

Sure enough, I realized at that point that I really represent two things for them. I'm money and technology and that's it. All right? And they had just gotten access to some more technology and they wanted to be left alone. Okay? Fine. About an hour later this is what the screen cap looked like, and for those of you that know Azure, you'll recognize most of the dashboard.

You'll see that they've got a storage account at the top. They've got a couple of Minecraft instances -- yes, you spotted it already. Minecraft, all right? They've got Minecraft running. Okay. Fair play to them.

Pretty good. Sure enough, this is a fairly recent plot and I'll explain a little bit more of this sort of stuff that they're doing, and yeah. They've spun up, originally, they just had a single CPU instance. That's all they needed for the two of them to sort of play. Okay? At least that's what they did for the first day or two. Actually, I felt quite good about this because I'd had a tiny little PC that I had been using to run Minecraft at home, within the house. I was doing the patches. I was doing the server updates. I was doing performance monitoring. They would be complaining about latency every now and then and I'd go and have a look to see if something was going wrong and if I could stop something streaming so they could have a better gaming experience.

And every now and then I took the firewall down so that people could actually play on the same servers, so friends that were local could play. It was quite an overhead for me to have to do that. Okay? So I was quite pleased at not having to do this stuff. They were really pleased because hey, they're running on one processor, but they can scale it up if they want to. They were doing data sharing, so they were both putting files onto the Azure server and then working on them. They were having shared experiences, as I would find, with their friends, and actually they started collaborating and innovating within just a couple of days in ways that I had never imagined, and I want to show you what they were doing. They were given an assignment. This is totally unrelated. They just had this assignment where they have to work together as part of a team to design a space station, explain the function of the different parts of the space station and how they arrived at the overall design. They came back and said we've got this assignment that we've got to do. It came down from [indiscernible]. We've got this assignment. Great.

They ran upstairs, started beavering away, and after about 10 or 15 minutes I wandered by the

bottom of the stairs and I could hear Minecraft noises. [laughter]. All right? So guess what they'd done. They'd opened up the Minecraft server to the group of friends and they were using the instant messaging in Minecraft to coordinate the different bits of the space station.

They were then screen capping, like this, screen capturing and pasting it into PowerPoint and they were putting a narrative and notes onto the design points. And they were doing this; they were flying into or walking into different parts of this and then taking the screen caps. All right?

This is what they did when I gave them access to the cloud. All right? From my perspective, I then went through all of these things [laughter] anxiety, loss of status, loss of control. All right.

That's not what I expected them to do with it. Okay? Like I was the one that provided IT to them. That was my function. Now I'm just a source of money [laughter] and nothing else. All right? And I actually literally -- no. You can't. You know, and then I actually stopped myself. I was repeating exactly, you know, the sort of paradigm shift that was threatening the folks at the universities delivering computing -- I've got this with my own family. All right, so what do

I have to do? It gives me some freedom. I have to find other ways of innovating, and I'll come back to that. Okay? All right. So that's a sort of very personal experience. Here's the magic, though. Here's the real magic, I think. I could never give my kids access to a supercomputer, or at least I wouldn't have thought to. They'd never get access to a supercomputer, but the cloud gives them that: exactly the same infrastructure and fabric we used, running 20,000 compute cores, to analyze, you know, billions of pairs of genetic markers. This is work by David

Heckerman’s team. It's part of this sort of research that Tony is in charge of. This is just one example. There are other examples that you've heard of in the past couple of days. But for me, there is something profound happening when my kids can use the same infrastructure that can do this type of research. That's a fundamental change. They would never get an NSF grant.

They would never get a DOE grant, or maybe not for a few years, at least, but they're already running on the same infrastructure, pretty remarkable. All right. So what are the implications of this? I'm going to go obviously much deeper than the sort of thing my kids do playing Minecraft. This speaks a little bit to some of the issues and comments that we were wrestling with before and talking about over lunch: the way that the cloud can even ultimately change the economics of the way that science is done.

Imagine for a moment that you get a proposal approved. You might have to go through some recruitment depending on whether you've got people resources available, but actually pretty quickly, if you do have people in place, you can deploy a community or research-group virtual machine in the cloud. You can extend your models. You can do whatever simulation work or analysis that you want, and then simultaneously you can do the following things. Again, this is pretty much uniquely enabled by the cloud. You can publish your paper. You can publish the virtual machine, the software stack that you used to conduct the research. You can publish it in an open way or you can publish it in a restricted way to your community. You can publish the data. Other people can reproduce your results. The scientific method actually is more enabled by the cloud. Without this sort of approach, you are begging to get access to people's software.

You're begging to be given access to their data. The paper may have been published, but as funding agencies insist that data generated from research funded by the public purse must be made available, the cloud actually gives us a mechanism for doing that. Perhaps the most controversial part of this -- and this relates to sort of a positioning paper that Dan Reed, Dennis and I wrote with Elizabeth Grossman, who is one of the policy folks here in Microsoft -- is that we were

asserting that the cloud actually creates the opportunity for researchers to build a marketplace for their data or their research services to be procured. What do we mean by that? We mean that industry actually can discover these published virtual machines or some of the exposed data and then say, okay, can you run that analysis but on our data set, which is also in the cloud? It gives an opportunity for researchers to offer different services, and those services can be at a range of different levels, okay? It could be that you've got some special runtime or data services, that you do some special data management or application services, or that you are going to provide just some fundamental research -- if you, whatever the commercial company is, give us access to your data, we will do the analysis. In the same way that you can publish your capabilities here to the research community, you can actually create a marketplace, or you can be active in a marketplace where you expose some of these capabilities to commercial customers. Why would you want to do that? You generate an additional revenue stream. Maybe you fold some of that back into your core research activities.

With a lot of this in mind, we put together the Azure for Research initiative. What we're doing is really a range of different things. We're providing free access to cloud computing. We are offering the training that some of you probably did at the beginning of the week, and we're providing some technical resources, and I'll mention some of those, and some support capabilities. I'm going to step you through some of the things that we're doing there. The first one of these is the training. In the last quarter or so we've done 14 training events. We've got another 14 to the end of Q4. That takes us up pretty much to the end of the year. If you are interested in hosting training, let us know. Dennis will guarantee that we'll deliver it at your university. [laughter]. No. We can't guarantee that we can deliver it to your university. We are interested in knowing, though, whether you think the training you've done this week would be useful locally at your universities -- and if you are not aware of the details of the training, we can take you through that. You can see where we've already deployed, and the scheduled ones are in this sort of beige color. That was the training.

The other sort of key thing that we are offering, if you are not already aware, is the awards. This isn't money that we give you. This is access time on Azure. These are Azure passes that last a year. They are worth about 180,000 core hours and approximately 20 terabytes of data, so that's roughly the sort of footprint. You could use that allocation more quickly than a year. If you did, we would probably ask you what you were doing, not in a negative way, though. What were you doing?

What were you thinking? But in a "hey, what were you doing, how could you use this so quickly?" way, because we're just interested in the workloads that you're finding work really well. And typically, what we're doing if people do consume their allocation more quickly than a year is we ask them the question what are you doing, and then we will probably just grant you another allocation if there is something that you need to complete. All right? So we are very flexible, and right at the moment, Dennis, correct me if I'm wrong, but we have no hard limits on those allocations.

>> Dennis Gannon: That's correct here; hopefully later, too. [laughter]. You mean on the number of allocations?

>> Daron Green: No. On the footprint that we're granting to people.

>> Dennis Gannon: Yes. We have lots.

>> Daron Green: Okay. We are. Okay. So what does it mean to actually apply to one of these?

This is pretty much it. The application process is pretty much what's on this slide here. You should be able to answer all of these fields. If you can't, don't apply. Things like name, position, title -- the hardest thing you have to do is that we want a little abstract and then a proposal, no more than three pages. We struggle to stay awake if it's longer than three pages. No, three pages is usually enough for us to get a handle on what it is that you're proposing. So it's relatively short, relatively lightweight. We take submissions every two months, on the

15th, that's the middle of all even months, all right? If you submit before the 15th, we will take a look at it, but you don't want to be submitting on, like, the 17th or the 19th of an even month because you'll have to wait a couple of months. Typically we take proposals and evaluate them every two months. We've granted 183 projects covering 28 countries, and you see the geographical distribution there in the glorious pie chart on the right-hand side. There is nothing geopolitical in these allocations, so don't read into this that we have any particular, you know, desire to not invest in Germany. It's just that we haven't had as many proposals from

Germany. There's a little bit of an artifact of -- you see, wow, there's quite a few from Brazil.

Why is that? We did the training in Brazil as one of the first places, so we get more proposals coming in from Brazil. If you don't see your country represented there then stick in a proposal.

As well as these every-two-month evaluations that we do, we have some special calls. There are some things that we are particularly interested in receiving proposals for.

We recently had one that was calling for virtual machines that may be relevant for your community; we are particularly interested in whether you would develop those and then publish them. We also did one in conjunction with the White House on analysis of climate data.

We’re going to do a very specific RFP for Brazil. We did our original training six or seven months ago. We're about to go back and do another round of training and we're going to have a call shortly for Matlab use on Azure, but I'll come to that in a second. We've got a major event coming up in the summer over in Russia. Some of you may be involved in that, but we are also open to suggestions on proposals. If there's other things that you think we should be asking for here, if there are other events that you think we should be creating to help us understand the way that researchers want to be using Azure, then just let us know.

>>: According to that previous talk we should have a Python call.

>> Daron Green: We should have a Python call?

>>: An iPython call.

>> Daron Green: iPython. Good suggestion. Okay. So those are the calls; we're still in the research world.

Delving in a little bit deeper by discipline, I wonder if we should do this sort of analysis that

Geoffrey has been doing on the nature of the applications, using some of the taxonomy that you've got. This is just by discipline. I'm rather disappointed by the chemistry showing, given that my PhD is in chemical physics. That is disappointing. A lot from computer science, and then digging into this segment here, you see there's quite a long tail, but machine learning, big

data analytics and that sort of stuff. Not that these are necessarily completely distinct, but it gives us some sense of the sort of workloads that people are targeting in here. I think, Dennis, correct me if I'm wrong, which you probably will, a lot of the things that we're seeing in the computer sciences are where there are computer scientists working alongside other disciplines -- yes. I got that one right. Thank you. Good.

>> Dennis Gannon: In the bio area, there's usually a bio person in charge but working with a computer science person, and that's why the bio category looks small there, because it all gets classified under computer science.

>> Daron Green: Okay. I'm going to go a little bit into these. Some of these you have had in-depth presentations on, but this is just a little bit of a sample. I don't think Harrison Ford is actually one of the PIs though. [laughter]. He did pose for the photo, which is rather good.

Two common requests that we've received -- and this is responding to those requests from some of the training events and some of the RFP winners -- are questions like, okay, do you have Matlab running in the cloud, and how do you help me do really simple parametric sweeps?

These are our responses. The first one is making sure that Matlab Distributed Computing Server is working on Azure, and this is just the graphic to support that claim. To use this, you need to have a

Matlab client. You need to have a license to a Matlab client. You can then schedule on to a

Windows HPC server from which you can then provision additional resources up in the cloud.

We've just gone through the process of agreeing the licenses that we own for this side, so we provide the licenses for this side; you need to come with the license for the client.

>>: Does that mean though that you have to have a DCS license on the client?

>> Daron Green: You have to have it on this side.

>>: What kind of a license is it?

>> Dennis Gannon: Client only, you just need the client license. We provide the Distributed Computing Server licenses. You just have to have an active, licensed version of Matlab. That's all you have to have. And MathWorks will activate it for you, verify that you're a user; we're working with MathWorks on this.

>> Daron Green: We definitely know that there is going to be a follow-up for this. What we are looking for is actually -- this isn't broadly announced, okay, so most likely from the constituents in this room we will fully populate the roughly five early adopter slots. We're looking for a few folks that are familiar with Matlab, that have workloads that they would like to push up into Azure. If you're one of those people, you've probably already noticed that Wen Ming is active on this; this is the e-mail, and just put as your title Matlab pilot, or just talk to him before he escapes at the back of the room. Once we've ironed this out, once we've had those early adopters working and checking that things are behaving the way that we want, we will have an RFP that will broadly invite people to propose workloads using this. Okay? But we don't want to open it up to a large number until we've ironed out all of the problems. There

may be problems. We need whoever those early adopters are to be a little bit sympathetic that things may not be perfect on day one. We're not coming in with nothing, so this is the first of a... 64-page document?

>>: Not quite that, minus graphics, it's about 20 pages. It's on the technical papers section of

the Azure for Research website.

>> Daron Green: This is something Wen Ming has been offering to help people to get started.

It explains exactly how you set all of this up. Okay. That was the first thing that we were sort of announcing, quietly announcing, that we'd like you to engage with us on. The next one is something that we've codenamed internally Simulation Runner. It allows us to do really just a scale-out of some existing sort of contained application. This could be Python. It could be an EXE. It might be the Matlab runtime or some other program that you've got, and the way that this works is -- and you can ooh and ah at the animation if you like. This is Wen Ming's handiwork. It looks hideous from here, I have to say. Ah, no, it looks good from here, but standing right there, everything's out of shape. What Wen Ming has built is effectively a portal from which you can define the nature of a parametric sweep that you wish to do. You have on your client, or end user, the ability to point it at a particular application, which could be in

Dropbox. It might be on some local storage or it could be on some Azure storage. The portal effectively defines the queue. The queue then gets cranked through and then sort of delivers.

You can monitor the activities and you'll see the status, effectively the status of the workload.

It's a relatively simple infrastructure, but we know it's a very common request for the sort of pleasingly parallel, some of the pleasingly parallel workload.
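To make the pattern concrete, here is a minimal hand-rolled sketch of the kind of parametric sweep Simulation Runner wraps in a portal, written against the Azure Storage queue SDK for Python. The queue name, connection string, parameter grid and the simulate executable are hypothetical, not part of the actual tool.

```python
# Illustrative only: a hand-rolled version of the parametric-sweep pattern
# that Simulation Runner wraps in a portal. Queue name, connection string,
# parameter values and the "./simulate" program are all hypothetical.
import json
import subprocess

from azure.storage.queue import QueueClient

CONN_STR = "<storage-account-connection-string>"
queue = QueueClient.from_connection_string(CONN_STR, "sweep-tasks")  # queue assumed created

def submit_sweep():
    """Client side: enqueue one task per parameter combination."""
    for temperature in (280, 290, 300, 310):
        for pressure in (1.0, 2.0, 5.0):
            queue.send_message(json.dumps({"t": temperature, "p": pressure}))

def worker_loop():
    """Worker VM side: crank through the queue, running the packaged program
    (a Python script, an EXE, a Matlab runtime bundle, and so on)."""
    for msg in queue.receive_messages():
        params = json.loads(msg.content)
        subprocess.run(["./simulate", str(params["t"]), str(params["p"])], check=True)
        queue.delete_message(msg)  # remove the task once it has finished

if __name__ == "__main__":
    submit_sweep()
```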

>>: That's half the jobs.

>> Daron Green: That's half the jobs. You can understand that it's a very common request, so rather than have everybody instantiate the same kind of script, what can we do to help?

>>: In 1988 that was 20 percent.

>> Daron Green: Yeah. Okay. Look at the animation. All right. Again, what we're looking for is some early adopters. I call it embarrassingly parallel; pleasingly parallel is a little bit more sophisticated. As somebody who used to parallelize CFD, computational chemistry codes, finite element code, these sorts of things, I would get embarrassed about claiming that I parallelized them.

>>: [indiscernible]

>> Daron Green: You probably already know the sort of workload, but, you know, it's image processing. We've got a vast array of images to analyze. There's some design optimization where you are changing some of the parameters of the design and want to go through maybe a full simulation or optimization on each one. Again, WenMing.msdn.live.com, slightly different title though, Simulation Runner Pilot. The previous one was Matlab. You could put Simulation

Runner and you can see a little bit of the screen cap of the Simulation Runner itself. Some of the things that have been requested of us are technical examples, design patterns, these sorts of things that are relevant to research. If you remember, I said that a lot of the previous examples were great if you're in business, so what have you got for researchers?

We've got technical papers. Other people have been asking, okay, that's fine. How do I get data in? I've got these devices, or I want to build devices, and I want to spray data up into the cloud. How can I do that? We've been working on these too. If you were to go online at Azureforresearch.com right now, you will see that we have technical papers on each of these topics, so there's an early overview just to get people started and then we get into a little bit more detail and a little bit more sophistication as we go through. So HPC and technical computing, scaling a cloud service, installing BLAST, using things like the service bus and Python.
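As a rough illustration of the "spray data up into the cloud" scenario those papers cover, the sketch below pushes a single device reading onto an Azure Service Bus queue with the Python SDK; the queue name, connection string and message format are assumptions for illustration, not taken from the papers themselves.

```python
# A minimal sketch, assuming a Service Bus queue named "sensor-readings" has
# been created and its connection string is available to the device. The
# payload format is made up for illustration.
import json
import time

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"

def send_reading(sensor_id: str, value: float) -> None:
    """Push one telemetry reading from a device up to the cloud queue."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender("sensor-readings") as sender:
            payload = {"sensor": sensor_id, "value": value, "ts": time.time()}
            sender.send_messages(ServiceBusMessage(json.dumps(payload)))

send_reading("soil-probe-01", 23.4)  # e.g. a soil humidity sample
```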

If there are gaps, if you say that's all very well, but, you know, I really need to do this. You don't have anything that explains how, you know, how I achieve this interaction. Let us know. Okay?

These are ones that we thought, just from the initial feedback, made sense, but if there are other things that you think we should be doing, please give us the feedback and we'll…

>>: So how do I make sort of an SDN-enabled part of either the IU or my research group’s computer infrastructure? That would be, I think, a broadly useful white paper.

>> Daron Green: Yeah. How you scale out from on premises.

>>: I could have you, as the cloud bursting provider or whatever you want to call it, be the provider and have it conveniently linked, using the ideas that that fellow talked about today. How SDN and IBMA, things like [indiscernible] -- interesting ideas.

>> Daron Green: Yep. Did you hear, folks at the back? I know you know, but just in case they want to comment. Geoffrey's request was could we have one that explains how to burst out from local premises, you know, on the university campus or in the department, out to the cloud. Other things that we can do: we can also give technical help if there are architectural issues that you have, or if there are particular code samples that would be useful, please let us know. I mentioned the Simulation Runner with the early adopters or beta testers. If you're having technical problems then we have a support e-mail that you can use, so this is where we have a third party that will help solve technical problems. We work alongside them. They just help us deal with some volume issues. That's a useful e-mail to sort of note. It's not an @microsoft.com address.

This is a third party that we have commissioned to help us answer some of the sort of technical problems. If you do plan on putting a proposal in, feel free to contact the sort of central contact team on the web. If you've got a preferred member of the team here, please feel free to e-mail them and they will work with you on helping tune the proposal. Okay? What I want to mention now is a conversation that I have relatively frequently about how you can build your own devices and get them to communicate with the cloud, and this is going to bring me back to where I started out a little bit. I want to introduce you to something called .NET Gadgeteer, and there's a little box of tricks that I've got back here, which is actually one of the .NET

Gadgeteer kits. Just to give you a sense, those look very big when they are projected up there, so this board here is this. What we found is that this was predominantly driven by the researchers in

Microsoft Research Cambridge, who were trying to innovate by constructing lots of different devices, and they built things like cameras that would take photographs intermittently and record the data to record a day in the life of, well, a day in the life of somebody that maybe has memory problems. And by reviewing information about the day at the end of the day, it would refresh their memory such that they could actually recall things over a long time period much more successfully. This was Steve Hodges and his research team, and myself, at Research

Cambridge. That was one device, and then the other devices that they wanted to build would maybe do a type of sensing in the home setting. They realized they were having to create from scratch pretty much all of the sort of base components, and they said look, this is silly. Let's go to some of the hardware manufacturers and get them to build boards which we can just plug the devices into.

That's what spawned the Gadgeteer project. It's programmed from Visual Studio using the

.NET Micro Framework. You can program in Visual Basic. You can program in C# if you wish. It doesn't look particularly elegant. That's the same board here with a range of different devices clustered around it. This is a little LCD touch screen. This is an SD card holder. There's an LED.

There are buttons, a joystick, there's a camera. There's an Ethernet port there; that's how you power the thing. It can be a little battery pack or it can go into a USB. And really, it's an incredibly flexible way of constructing sensing devices. Temperature sensors are kind of standard issue for these things, temperature, humidity. You can have soil humidity sensors. There are hundreds of different devices now that have been produced with this. The thing is that you can go from having nothing, the kit in the form that it is here just in the box, to something that is working within just a couple of hours. There's no soldering required. You just plug it together. You can get Bluetooth interfaces. You can get -- what else can you get? We've got Wi-

Fi connectivity, so you could have something relatively simple, low power. You can get GPRS as well. Yeah, so it will connect to the cell phone network. Relatively simple, easy to construct and deploy, and then send information back to you or back up into the cloud. I won't go through it all; I've mentioned a lot of the stuff that's on here. This is just one example. I built this, well, Visual

Studio built this. This is me constructing an FM radio. This is the FM radio module. I then put on an infrared receiver and have a little key fob which is here that I used for tuning it. I didn't bother putting on, though I did put the display on it, actually, so that's the back of the display.

By pressing up and down you can tune the radio. The display will tell you which radio station you are listening to, which track. You can see the volume. This is it actually constructed. You can see it's listening to channel 94.9. This is actually in a hotel in Moscow, and

I went over and was building this on the flight, which brings me to a note of caution [laughter] about the way that airline crews view that sort of electronics. [laughter]. They get rather anxious because they don't know what it's doing, and you look very suspicious, and there's a conflict going on in Ukraine, and they start talking about strapping your hands to the chair. Right. But other than in-flight, it's a really good idea to get one of these kits and start programming it and playing around, which brings me back to these guys. You can guess what they want to do, perhaps. They want to take one of these kits, work with that group of people on the top right and reproduce something that Kenji's [phonetic] colleagues did and Kenji was part of. In this case this is a Windows mobile phone sent up on a weather balloon, so it went to 80,000 or 90,000 feet, something of that magnitude. Kenji very graciously came and gave a presentation to these Boy Scouts and sort of inspired them. This won't be sending back data to Azure because there's no network up there that they are going to sort of be in communication

with, but it sort of brings us full circle onto the sort of ways in which people can now build devices, and they can be recording scientific data as this thing goes up. They are going to be analyzing it when it gets back, but we're just at the start. It's now commodity electronics that you can use to do scientific data collection, and these guys have access to the cloud as well. This is just one sensor that they are going to be sending up, but you can bet your life that within six or nine months they are going to be thinking, hey, what else could we be doing? They've already been talking about what if they had a few more kits and packaged them up and threw them into Puget Sound and let them sort of float up and down; they could look at the patterns.

They will get signals back every so often if they can see a network, because the Wi-Fi constantly scans for open networks, attaches to a network, punches some data back. That then gets you to the problem of, okay, how are you going to get all this data back from multiple devices?

There's a service bus infrastructure that we have within Azure, but I want to talk a little bit about another project that we have for managing -- you hear about the Internet of Things. We have a project called the Lab-of-Things and I'm going to play you a short video that explains that, and hopefully the volume is going to be high enough. You may need to be quiet.

[video begins] [music]

>>: Lab-of-Things is a flexible platform for research that uses connected devices. Meet Albert, a researcher who builds and deploys connected devices into people's homes. Researchers like

Albert work in many different domains including healthcare, energy management and home automation. When Albert wants to conduct a research study, he needs an easy way to deploy his connected devices into a large number of houses, ideally, geographically dispersed around the world. To simplify deploying sensors and devices into homes, Albert installs a home hub, a

Windows computer running the HomeOS software. The home hub supports discovery and setup of several different types of devices, including Z-Wave devices, IP cameras and custom devices built using Microsoft .NET Gadgeteer. Albert can extend HomeOS to work with other devices as needed by writing simple drivers. For example, if Albert wants to study a new sensor he invented that detects when people fall in the house, he writes a driver and then a small application to connect to the Lab-of-Things platform. He can then easily store data in the cloud, remotely update his study if he finds a problem and monitor the status of his deployments across many different homes. Marie is another scientist studying aging in place. Like Albert, she uses Lab-of-Things to deploy sensors in many homes that she can easily monitor. Marie realizes that she and Albert have recruited similar participants in different locations. Because they are both using Lab-of-Things, Albert can deploy his study into houses Marie has recruited and Marie can deploy her study into some of Albert's houses. Lab-of-Things enables researchers to share data, code and participants, lowering the barrier to evaluating ideas in a diverse range of settings. [video ends]

>> Daron Green: So the one caveat I would make to that is it sort of implies that you're deploying globally and you're collecting data over national boundaries. I'm not quite sure that there's a lot of that happening, but a very common scenario is one in which people wish to deploy sensors in houses for, you know, a particular cohort; in another part of maybe the U.S. people also want to deploy, and then you realize that you've recruited essentially the same type of people and you want to be able to increase your number of locations. If you set it up using the Lab-of-Things and the sort of software fabric that we've got, you can easily deploy and monitor. It's a very, very common problem that we've seen that people without this sort of middleware have had to sort of regenerate or recode, create drivers, and maintain a whole lot of software. So if you have an infrastructure, or if you have an intent to deploy sensors anywhere, then have a look at the Lab-of-Things. It will really accelerate you through that process. All right. Let me just get back. You can obviously type Lab-of-Things into your favorite search engine or Bing [laughter] or you can have a look at one of these web links. If you've got any comments or any questions then just contact Lab-of-Things@Microsoft.com and it will be routed into our research team.

Okay. To just give you a summary, I haven't really labeled this because I think I would be preaching to the choir somewhat. The cloud is, without doubt for me at least, a genuine paradigm shift. I've added education here just from when I look at the way in which it's transformed what my kids can do and how they are now talking with their friends about tackling assignments that they are given. It's genuinely transformed the sort of research capabilities, and I would argue that for those that wish to use it that way, it can actually provide a new economic model for the way in which research is funded. Not for everything, not for all data, but it gives the opportunity for researchers to make available those capabilities in the marketplace in a way that maybe wasn't so easy before. We are offering Azure for Research training. Some of you have taken it. Some of you don't need it. If you think that there are people that do need it, or you want to host some of the training, just let us know. The researcher awards are a great way of you not spending any money and getting access to a lot of compute infrastructure. There are the open invitations every two months. We run these special calls as well, as we are doing for Matlab. We've got a lot of other technical resources: the papers that I mentioned, Simulation Runner, Gadgeteer and Lab-of-Things. If you didn't want to write anything else down then, you know, there's sort of a catch-all for everything:

Azure4Research.com. That's where you'll find all of this and lots more. Okay? With that I'll just say thank you and give you ten minutes back. [applause]. Thank you.

>>: Are there any questions?

>>: If you still don't want to write anything down, we have postcards with the website written on them.

[laughter].

>>: I was wondering for your research awards, is there, what is the threshold? If I wanted to explore using Azure as a data processing platform versus my own Hadoop cluster versus, you know, Amazon, but I don't really have a defined research goal that's going to produce like a paper at the end of it. More like, you know, see if this is a computing paradigm and if you guys are offering better things than somebody else, is that within sort of the thresholds, and also, is there a limitation on there must be a professor or a department or can it be staff researchers?

Can PhD students get awards, but then they probably don't need 180,000 hours. They probably need 5000 or something.

>> Daron Green: Yeah. To justify an award, there needs to be something in there, not just I feel like playing around with Visual Studio, maybe. You're going to put some workload on

there, so you should just indicate what that is. There are a lot of folks that are doing some pretty good work on here's Hadoop running on Amazon, here's Hadoop running on these other platforms, and we actually have a project with the Barcelona Supercomputing Center that, without going into the real detail, looks at, for this hardware, for this configuration, for this interconnect, this is where you want to put the dials for this type of workload, because Hadoop has many, many different configuration points. So there may be something that you suggest where we go, actually, that overlaps with the Hadoop work that we've got. You are less likely to hit that if there is something specific that you're -- "I'd like to evaluate putting this type of workload on", okay? That's okay. It doesn't have to be a tenured professor. It has to be someone at the university. If it's a PhD student or a postdoc, that's awesome. That's really good, and if they get this as an award, that's actually going to look really good on their CV

[phonetic], the fact that they managed to justify this. We would definitely encourage people within the research groups to sort of come forward. Typically you want the slightly older researcher that has maybe been around a little bit. He or she knows how to program. [laughter].

Not that old. [laughter]. You want somebody that isn't going to sort of fumble around for the first six months and then realize they've only got six months left of the award. I mean, we can always extend it; it's not a problem. We have training passes that we use for people to take the class, which don't last the full year and are not as big in terms of footprint, but if you think there's a chance that you would want to scale up at some point, I would put in for the bigger award. If they don't use it, it's not a problem. We over-allocate. We were given this much from the Azure business and we're probably going to give away a lot more than that, on the basis that not everybody uses everything. Question over there?

>>: I have a colleague at SDSC who teaches a Matlab course and I want to send her an e-mail that maybe she should approach you on the Matlab pilot thing that you have. The one question

I have is she [indiscernible] do you have services that are set up in Azure for [indiscernible]

>> Daron Green: Yes, so…

>>: If you were going to say no I was going to ask if you were open to suggestions on what should be done.

>> Daron Green: Let me tell you what we've got in terms of the educator passes, because they are a little bit different. There's some specific things called educator passes that we use for people who are teaching a class where the faculty member wants to issue a number of passes to students for a particular course for a particular semester, and they want to be able to go in and see whether they are using them, so we have passes like that have been issued. I'm not sure if we're still issuing at the moment but the mechanism is there, okay? There are some funny things that go on though, because some people use them in a very responsible way and some people are a bit less responsible, like they will spin up a Hadoop cluster and then let it keep running.

>>: That was going to be my other question because there might be [indiscernible] in there and

[indiscernible]

>> Daron Green: Yeah, we just could beat people up. That's what we do which I'm very comfortable with, but then I'm from England and that's what we do. [laughter].

>>: This is more of a suggestion. With Azure you can sign up for free, but for only 30 days. There is another company across town where you can sign up for free for one year, and this is feedback from students. They get busy. They get homework [indiscernible]. So is there any possibility to make that learning period longer, maybe 60 days or 90 days, so they have a chance to learn, and then after they learn we can maybe give them a research account?

>> Daron Green: We're providing feedback to the Azure team. We don't own those passes; that's the Azure team and we're also cognizant of the limitations for those individual passes.

There are other forms of passes that give them longer access. I can't remember, but do you have to put your credit card in for that other company?

>>: I can't remember.

>> Daron Green: I think you probably do.

>>: I think so. You do. And then if you go above the cap you get charged. With the Azure one, when you hit the limit you have to actively go on and unlock it in order for you to be charged.

>>: So that's [indiscernible]. They don't know how to get into it, and they have only 30 days and it's gone. I just wonder, maybe if it were 60 days, there would be fewer excuses for

[indiscernible]

>> Daron Green: And in part that's why for a lot of the processes that were set up by another part of our business, not that I particularly want to expose the workings of Microsoft to you.

That's one of the reasons why they went with the faculty model, where they would grant it to the faculty who would then issue the passes, and those were, I mean, they are small, but they are still pretty meaty. They go well beyond what a competitor might have been offering.

>>: Our training passes are good for six months, no credit card.

>> Daron Green: The training passes that we issue are six months.

>>: Good to know.

>> Daron Green: But it's great feedback and yes, it's one we've heard before, but it's good to have it reinforced.

>>: I was wondering about the hardware thing that you showed. That was really cool. And I could see people building that and maybe integrate it in with like the Lab-of-Things and I have sort of two questions. Where do we find out more information about it? And the second

question is can you sort of build the same Lab-of-Things infrastructure without having that requirement of the HomeOS PC there, because that's a much bigger device, costly device, whereas these other smaller things you want to throw out there and connect them and have them push data to Azure or something and then analyze it, that could very easily scale a lot more than like a PC somewhere that is doing data aggregation.

>> Daron Green: The HomeOS piece of the Lab-of-Things is a requirement for Lab-of-Things. It acts as that aggregation point. It allows you to update. It allows you to control, in a more sophisticated way, the behaviors of the infrastructure in the way that it is deployed in the setting. We went around that design decision, and it's a very clear requirement that you want to be able to update drivers. You want to be able to update, and you can't do all of that if everything is remote and you've just got these very dumb clients. If, however, you say no, I don't want that, I won't have this sort of local aggregation point, the HomeOS, then you can use Gadgeteer or other equipment that you want to deploy and use the service bus infrastructure that there is natively in Azure.

On that point, we sat down, again, at the first training that we did. It was down in Brazil, and one of the grant applicants, one of our award applicants, had just put in a huge request to a funding agency in Brazil for a whole lot of infrastructure that they were going to build out. We sat with him the day after we had run the training and he was grinning. He was just grinning, because he was pretty sure he was going to get his application approved by the funding agency. We just saved him a year because of the service bus infrastructure. He said, I was going to build all of that. I'm going to get the money for it, but I don't need to spend it on that because you have done it. All of the message queuing that you want, handled in an elegant way, we have already got as part of the service bus, because we have lots of businesses. This is where you benefit from that business use: people programming mobile phone applications that need to send messages asynchronously up to the cloud, have something done and then something sent back out. All of that is already there. You can just piggyback on that for your scientific applications. You don't have to use Lab-of-Things. Lab-of-Things is just a more sophisticated way of handling certain types of workload. Typically, clinical people want to deploy and then just monitor what they're collecting in a home setting.
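To show what that piggybacking might look like in practice, here is a companion to the earlier device-side sketch: a cloud-side worker that drains the same hypothetical Service Bus queue with the Python SDK. The queue name and the processing step are assumptions; a real deployment would add error handling and its own analysis.

```python
# Companion sketch to the earlier sender: drain the (hypothetical)
# "sensor-readings" queue, process each message, and acknowledge it so it is
# not redelivered. The analysis step is a placeholder.
import json

from azure.servicebus import ServiceBusClient

CONN_STR = "<service-bus-connection-string>"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver("sensor-readings") as receiver:
        for msg in receiver:  # blocks, yielding messages as they arrive
            reading = json.loads(str(msg))
            # placeholder for real work: run an analysis, update a dashboard,
            # or write the reading to blob storage
            print(f"{reading['sensor']}: {reading['value']}")
            receiver.complete_message(msg)
```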

>>: Well thank you.

>> Daron Green: Great. Thank you. [applause]
