>>: We work here in Microsoft Research. We have an internal version of the Virtual
Earth Summit that's called the Location Summit that we've run the past two years,
and it's mostly just internal Microsoft people talking about location projects that
they've got going on here. It's just to kind of get us all on the same page, so that
the company knows what's going on. And so part of the Location Summit
we're combining with the Virtual Earth Summit, and that's what this next section
of talks is: they're short little mostly ten-minute talks just to give a basic idea of
what's going on in the company.
Let's see. And if you're here for that, and you can't find a place to sit, there are
more chairs over in the corner. So our first speaker is Steve Stanzel from
Microsoft Virtual Earth and he's going to talk about Virtual Earth 3D Cities V.2.
>> Steve Stanzel: Okay. Thank you. So is my mike on? Can everybody hear
me? It doesn't sound like it. So can someone help me get my mike on? Is that
better? Okay. Excellent. So I'm Steve Stanzel. Unlike many of the
presentations you saw yesterday and a little bit this morning, I'm not here to
present my own work, I'm here to present the work of literally hundreds of people.
And if I look around the room, I think about Franz and his founding of Vexcel and
decades of research in imagery and 3D reconstruction and camera and sensor
design. I think about people from the VE team here, with Lauren and Duncan and
Jason and Conrad. There's a bunch of people here in the room that actually
represent some of the work that I'm going to show.
And I was asked to say what are we doing in Virtual Earth around 3D and how
are we using imagery to recreate a 3D world, and do that all in 10 minutes. It's a
little bit of an impossible task, but I think I decided it's a visual tour.
So what I want to give you is just a little bit of a flavor for what we have currently
in Virtual Earth and maybe a few teasers of what we have coming up. So the
best way for me to do this is to let you know a little bit about what we have live on
the Web today. And basically we've had 3D data on the Web through Virtual
Earth since November of 2006. We have about 300 cities live on the Web today,
what we call internally our V1 cities, and this month we launched our first four V2
cities, and I'm going to talk to you a little bit about how we create those V2 cities.
And fortunately the marketing team actually put together a quick video, so I'm
going to reuse it; hopefully no one has seen it too many times. It
gives you a little bit of a view of what we have on the Web today. So this is all
live data.
(Music playing)
>> Steve Stanzel: All right. What do you think? Like it? So that's a lot of blood,
sweat and tears for us.
(Applause)
Whoops. We don't want to watch that all ten minutes, do we? Although I could;
maybe the rest of you wouldn't want to. So like I said, I represent a bunch of
different teams. I come from the team in Boulder, Colorado. We call ourselves
3DI. We create both some of the 2D imagery and all the 3D imagery for Virtual
Earth. The Redmond VE team is clearly very instrumental; you'll hear a bunch of
talks over the next few days from that team, and then also there are some
individuals here from the Graz team that do some of the core
research. So what I just showed was a very quick visual tour of what
we call our V2 cities. This is a quick explanation of how we think about that. In
the last year and a half we've moved from the ability to create a city from this
aerial imagery to this increased realism that we have in our next step and it's just
the next step. We see many steps kind of going forward.
So we currently have building geometry that, because of the limitations of bandwidth
to the clients and the performance of the GPU and other things, we're doing as
two-and-a-half-D buildings, just straight 2.5D facades. In the past we were using
textures just from the aerial cameras that the Vexcel team has created, and in
the future we're moving forward with a combination of aerial cameras for
geometry and oblique cameras for texturing, and that's why you see some of the
greater realism in the four cities that are live on the Web today.
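As a rough sketch of what "2.5D" buildings come down to, the snippet below extrudes a flat footprint polygon to a single height, producing vertical facade quads and a flat roof. It is an illustration only, under the assumption that a building is footprint plus height; it is not Virtual Earth's actual data format or pipeline.

```python
# Minimal 2.5D sketch: a footprint polygon plus one height, extruded into
# straight vertical facades. Illustration only, not the production pipeline.

from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in some local metric frame


def extrude_footprint(footprint: List[Point], height: float):
    """Turn a 2D footprint into a simple prism: vertex list plus wall quads."""
    n = len(footprint)
    base = [(x, y, 0.0) for x, y in footprint]      # ground ring
    top = [(x, y, height) for x, y in footprint]    # roof ring
    vertices = base + top

    walls = []
    for i in range(n):
        j = (i + 1) % n
        # Each facade is one vertical quad: two base vertices, two roof vertices.
        walls.append((i, j, n + j, n + i))
    return vertices, walls


if __name__ == "__main__":
    square = [(0, 0), (20, 0), (20, 10), (0, 10)]
    verts, quads = extrude_footprint(square, height=35.0)
    print(len(verts), "vertices,", len(quads), "facade quads")
```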
We had building density in the past. If you looked at Virtual Earth prior to two
weeks ago, we had building density of two to five or six thousand buildings per
city for those first 300 cities, and now we're getting in the neighborhood of
100 to 150,000 buildings. And so because of the great work that went on with
Franz and Conrad and Jason and others, we have the ability to automatically
reconstruct these buildings in a way that is getting us closer to true realism.
And the orthos have gone from a traditional mapping ortho to true orthos. With
the ability to do all the geometry, we're able to turn our orthos into true orthos,
standing all the buildings up exactly on top of their footprints.
And actually, as a byproduct of the need to recreate all these cities, we
have the ability to get a very high-resolution one-meter DEM. And on top of that,
we're adding trees and other nice little things for visualization that you saw as part
of that.
I am going to give a very, very quick demo. Do I have just three minutes left, is
that right? We must have started late. Five. Great. So I'm going to give a very
quick demo -- how many people have used Virtual Earth in 3D? As I expected,
many, many people. How many people have seen the four new cities that we
have? We have Vegas, Denver, Phoenix. Excellent.
This one is Denver. And just a very quick tour. I encourage everyone to first go
get an Xbox controller and try this yourselves, as well as come in and get a
chance to fly around in some of our V2 cities. So this is a little bit of Denver.
And I realize I'm very pressed for time, so all I'm going to do is come in and show
you a little bit of what our realism looks like. This is an automatic reconstruction
of a roller coaster. Wouldn't probably meet the needs of some roller coaster
buffs, but the fact that we can get something like this automated out of our
system is just an amazing addition, I think, to what we're going to be able to do in
the future from a Virtual Earth standpoint.
I am going to give you a whirlwind tour here of how we do this. So let's see. We
talked about V1 and V2; these are the V1 cities and the V2 cities. We've been
through the details. Here's what Vegas looked like early on. And this is a little bit of what Vegas
is starting to look like now at the street level. So you see we've gone from great
visualization from the air to getting to better and better visualization from the
ground. It's incredibly important for us; we believe that the world needs a Virtual
Earth framework to place all of the location information long term. You've seen a
ton of great innovation from this group, and we certainly hope that this is a great
backdrop and framework for things in the future.
Here's another example of some of the -- if you fly down into Vegas you'll see
3D buildings like this. Some things that are familiar if you've ever been there
start to really jump out. Here's a quick example of true orthos, and you'll see that
the buildings are standing exactly on their footprint. That's an important thing for
us to improve the overall realism. For example, Miami we actually haven't released
yet, but we're getting better and better textures there.
The DEM that I mentioned is giving us, as an example, the ability to do real
terrain including water and riverbanks. We get to the point that we actually could
probably do virtual golf and carve out the bunkers, et cetera. And like I mentioned,
the building density increase is actually coming from the suburbs. So we're at the
point now where we can model the one- and two-story buildings. These aren't
perfect models, but it's a great framework for us to go in, whether it's a combination
of Photosynth or other image technologies, and let the community come in and
contribute. In the future, who knows where we're going.
This is an example of some work that Conrad's team has done over the last
couple of years, along with Graz. We do have the belief that anywhere I want to go in
the world I should be able to get a realistic framework and know what it's like to
be there, whether I'm familiar with it or not.
I'm going to do this in two minutes. These are the four basic steps; I'm just going
to fly through some visuals, right? We acquire imagery from the air as well as
from the ground. We have to orient that imagery; obviously we have to
understand it, evaluate and classify that imagery, and then we go on to
reconstruct the buildings. So it's a huge problem. How do we do this? We
actually have a grand aspiration: we think that all the major cities in the world
should be modeled long term, and that's what Microsoft's pursuing.
We just announced a couple weeks ago a 15-petabyte storage and data center.
That's because we do believe that we're going to get to the point that we have
thousands of city models in Virtual Earth. So, in two minutes: we use the
UltraCam. It has the ability with these sensors to give you great information from the
air. Zooming all the way in, people love to look at the quality of this imagery.
This is what the image looks like on the left and the ability to resolve on the right.
We have to go through the orientation.
This is an example of what we do for the oblique images: taking this,
matching it and getting to the point that we can actually use that for texturing.
Through the bundle -- this is an example of Boston after we've taken all of our
aerial imagery and all of the tie points that tie it all together. That gets us to the
point that we can start to do understanding of that imagery, through a
combination of the pan as well as the color imagery. We evaluate that, we do
what we call classification, identify water, solids, vegetation, refine that
classification to identify the buildings and other impervious surfaces and trees,
and get to the point that we can actually start to reconstruct the world.
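To make the classification step a bit more concrete, here is a toy per-pixel classifier in the spirit of what is described: label each pixel as water, vegetation, or impervious surface from its color bands. The indices (NDVI, NDWI) are standard, but the thresholds and the sample values are made up for illustration; the production system is far more sophisticated.

```python
# Toy per-pixel classification sketch: water / vegetation / impervious,
# using simple spectral indices. Thresholds are illustrative only.

def classify_pixel(red: float, green: float, blue: float, nir: float) -> str:
    """Classify one pixel from reflectance values in [0, 1]."""
    ndvi = (nir - red) / (nir + red + 1e-9)      # vegetation index
    ndwi = (green - nir) / (green + nir + 1e-9)  # water index
    if ndwi > 0.3:
        return "water"
    if ndvi > 0.4:
        return "vegetation"
    return "impervious"  # candidate buildings, roads, parking lots, etc.


if __name__ == "__main__":
    samples = {
        "lake":    (0.05, 0.10, 0.15, 0.02),
        "lawn":    (0.08, 0.20, 0.06, 0.45),
        "rooftop": (0.30, 0.30, 0.30, 0.25),
    }
    for name, bands in samples.items():
        print(name, "->", classify_pixel(*bands))
```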
So in one minute, here's how we create the buildings. In the top right corner
we're doing dense matching to basically determine geometry from all that
imagery. In this corner here you have the traditional orthos that we started with.
We have textured buildings, maps of both trees in this case but also buildings.
And flying through that in one minute: we have dense surface matching that
gives us this incredible resolution. If you look at this closely, this is
buildings including bumps for cars; you can start to see trees, you can see
fences; there's an incredible amount of detail here that allows us to go from this
view of space and geometry, erasing all the trees and buildings, and then
creating the 3D models from that.
We also identify the trees and then create true orthos. So that's my quick 10-minute
tour of how to create the world in 3D. And I think I'm supposed to ask for
one question. So does anybody have any questions? (Laughter).
No. Okay. That's it.
(Applause)
>>: Well you said one question, so I'll ask the question.
>> Steve Stanzel: Okay.
>>: How are we doing compared to our competitors in this area, would you say?
>> Steve Stanzel: There are very few competitors that can afford to even
compete, and it's a massive project, as I briefly mentioned, that we're very serious
about: acquiring not only the U.S. cities, which you see predominantly today, but
also Europe and Asia, and if you look at it from a coverage count or a building
count or building quality, we're far superior to Google, who is rapidly trying to
catch up, as you noticed a couple weeks ago with their Google Earth 4.3, so -- okay?
>>: Thanks.
(Applause)
>>: So thanks, Steve Stanzel, for going through so fast; I appreciate it so we stay
on time.
>> Steve Shafer: How do I do a slide show in this version of PowerPoint? All
right. Thanks.
>>: Our next speaker is Steve Shafer, who is a principal researcher here at
Microsoft Research.
>> Steve Shafer: Thanks. So is my mike on? Okay. I'm going to be talking
about the modeling that I'm doing inside of buildings. You've just seen the
outside of buildings. So what I do looks at the inside of buildings and is not
oriented towards what you see in a lot of the research literature which is very
flashy, high realism 3D and so on, what I'm looking at is the industrial grind of
how do we take a facility like Microsoft with 400 properties and turn that into a
computational model, which is not going to be a high res, 3D art on the walls kind
of model, it's going to be something that looks a lot more gritty than that but has a
lot of practical business value.
And there's a lot more on the slide than I'm really going to talk about, because I
thought that people might, if interested, look at the PowerPoint and see. But
here's an example scenario where we have some rescuers who are coming to a
building, so there's map issues about getting into the building and federating from
the Virtual Earth map down into a private map. You have a private map starting
at the campus level, and then the building level and then within the building. And
there might be different routes that you follow depending on different credentials
or capabilities. Somebody pointed out if you have an axe that also will get you in
this door.
And when you get to the building and then you get to the elevator and then you
want a path that goes up to the elevator and you want to be able to overlay data,
so here the A, B, C of the rescuers might be coming from the rescuer's beacon
system which is a system that's totally independent from the building model that
the building operator has, but somehow you want to put those two together
in real time and then also give the rescue commander the opportunity to look at
sensor data, maybe pop up a live image from a video camera or a temperature reading
or something, to drive that building model interactively just as the building
operator would do. So there's a lot of map federation across different maps as
well as different systems that you want to do for that scenario.
The deployment model for the facility map framework goes something like this.
You start with your CAD drawings and what I do is I start with Visio, which has
the ability to import from AutoCAD, everybody can export to AutoCAD, so we
import that into Visio, and then I have an editor in Visio that you use to attach
metadata to the drawing stuff that you have there in Visio. And there's more
steps up there. If you look at Microsoft, there's actually a step before that Visio
step where they have what's called a CAFMS, a Computer-Assisted Facility
Management System, and then I take the output from that, actually.
But anyway, you annotate it with this editor, and that creates a Visio drawing that
stores all the metadata, and one of the abilities is to click a command and export
that into an XML schema, so you can have an XML file that corresponds to that
Visio file, and you probably have a bunch of those for a given enterprise facility.
Then if you have an asset database you could either take a snapshot of that into
the XML, or you could for example write a Web service to wrap around it to
provide access to that data, and if you have real-time sensors you probably
surface those through a Web service or perhaps a SQL model. I don't have a
SQL piece built yet; I do have a Web service piece. And for a small organization
that might be all you do; for a larger organization you have to worry about
deployment and versioning and things like that.
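The actual schema produced by the Visio export isn't shown in the talk, so the snippet below is a hypothetical illustration of the general shape of "drawing plus metadata as XML" and how an application might read it; the element and attribute names are invented.

```python
# Hypothetical facility XML export and a minimal reader. The schema shown
# here is invented for illustration; it is not the talk's actual export format.

import xml.etree.ElementTree as ET

FACILITY_XML = """
<facility name="Sampler">
  <floor number="2">
    <room id="115/2186" kind="conference" capacity="12"/>
    <room id="115/2190" kind="office"/>
    <route from="115/2186" to="elevator-A" cost="40"/>
  </floor>
</facility>
"""


def load_rooms(xml_text: str) -> dict:
    """Read rooms and their floor/kind metadata out of the exported XML."""
    root = ET.fromstring(xml_text)
    rooms = {}
    for floor in root.findall("floor"):
        for room in floor.findall("room"):
            rooms[room.get("id")] = {
                "floor": floor.get("number"),
                "kind": room.get("kind"),
            }
    return rooms


if __name__ == "__main__":
    print(load_rooms(FACILITY_XML))
```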
So here's a little screen shot of the editor. This is a room detail where we've got
a bunch of computer racks and this is actually an architecture mark that means
there's a cross-section, so for each of these computer racks you could swing
down and see it in the front view, and this block here indicates a front view of the
whole bank of computer racks. And then there's routes that were drawn by
dragging in shapes from over here and adding the metadata. Here's what one of
the metadata windows looks like so the route pieces are added here, here's a
little connector to connect out to the floor plan for that floor of the building.
And then to build an application that consumes all that, there are map providers,
and this is where it's like a metropolis drawing as if this is a different system that
accomplishes a lot of the same purposes in a very different domain. So here's
an XML provider, I have the Web service provider, I haven't done an FTP client
or SQL client yet, and then there's the object model that gets populated and then
there's a set of engines that sit on top of that so all of those things are exposed to
the application so it can drive any of those functions.
And by the way, there's also a UI piece, so that if the application wants to use
my UI, it can, and there are other bits and pieces to make a whole system.
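The layering just described, interchangeable map providers feeding one object model that the engines and UI consume, might look roughly like the sketch below. The class and method names are invented for illustration; the real SDK's API is not reproduced here.

```python
# Sketch of the provider/object-model layering. Names are assumptions.

from abc import ABC, abstractmethod


class MapProvider(ABC):
    """Anything that can produce a facility map for the object model."""

    @abstractmethod
    def load(self, facility_id: str) -> dict:
        ...


class XmlFileProvider(MapProvider):
    def __init__(self, path_by_facility: dict):
        self.paths = path_by_facility

    def load(self, facility_id: str) -> dict:
        # A real provider would parse the exported XML file here.
        return {"source": self.paths[facility_id], "rooms": {}, "routes": []}


class WebServiceProvider(MapProvider):
    def __init__(self, base_url: str):
        self.base_url = base_url

    def load(self, facility_id: str) -> dict:
        # A real provider would call the facility Web service here.
        return {"source": f"{self.base_url}/{facility_id}", "rooms": {}, "routes": []}


class FacilityModel:
    """The shared object model that the engines and UI sit on top of."""

    def __init__(self, provider: MapProvider):
        self.provider = provider
        self.maps = {}

    def get_map(self, facility_id: str) -> dict:
        if facility_id not in self.maps:
            self.maps[facility_id] = self.provider.load(facility_id)
        return self.maps[facility_id]


if __name__ == "__main__":
    model = FacilityModel(XmlFileProvider({"Sampler": "sampler.xml"}))
    print(model.get_map("Sampler"))
```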
So here is a little screen shot from the UI testing program that can get the map
from different sources, and here's a little building map with some people data
overlaid on top of it. That data actually comes from a spreadsheet and gets read
in and converted into the internal format. And then this right click menu that
has a lot of options for zooming and showing the inner and outer maps and going
up and down floors and overlaying things, that's built into the user interface piece
in the SDK, so there's actually no application code here at all, the application just
says take this thing and point it at a file and the rest is all built into the package.
And then this is a tree view that mimics what's happening there so as you add
overlays, they get added into the tree view and you can click on things that say,
you know, show me where this one is and so on. So there's a bunch of features
that are shown in the right click menu, and some things not shown, such as the
cross-section maps and going up and down floors. If you shrink the thing down for a
mobile device, then what happens is the detail goes away, the icons go away,
the labels go away, you see just the room outlines, but where there are people
the room gets colored green so you can see that that room has people in it, you
just don't see all the detail. And if you click on that room, then you get to see
what's inside that room.
So there's actually a different format for small devices. And I should say for small
images. So if you're showing it on a mobile device, you would start with that
small view, and if someone wanted they could zoom in and see this view. If
you're showing on a big device, then you probably would start with this view.
And by the way, if you make your window small, it will shrink into that. Because
the description of those different views is in the data model, it's not in the
application itself.
All of the system is designed with regard to what I call promiscuous applications.
That's a term sometimes used in ubiquitous computing to mean that you have a
piece that you carry with you that can mate with many different systems as you
go travelling around.
And in the case of location based applications, again, with public maps or outdoor
maps you don't have to worry about this, there's one external world, but when
you start talking about interacting with private location data, there's different
private location data for every place that you go. And so somehow you want one
system that can mate with all of them if you want to build an app that you can
carry with you, otherwise you have to use dedicated applications that are
different for every different location that you visit. So a good example is MS
Space, which works fine at Microsoft where all the data has been prepared for
MS Space, but you couldn't use MS Space anywhere else. If you go to Boeing
MS Space is meaningless because the data isn't in that format.
So the idea of promiscuous applications is an application that you can run more
or less anywhere, and the key to that is having some standardization, and I hope
that ultimately the work that I'm doing will point the way towards that.
And an example of that, if we're going to build something as an add-in for
Outlook, so there's the calendar and there's an appointment, appointment B and
here's a room that it's in, it's in a well-known facility there called the sampler,
which is my little test data, and there's a room number. So you could imagine
that your location would be Microsoft:115/2186, and that's enough information.
Somewhere on your system is a thing that says Microsoft and gives the pointer to
the Microsoft file, and then the rest of that string can be interpreted in that file.
And so let's say you're collaborating with somebody at Boeing, you could have
Boeing as another location, Boeing colon whatever will get sent over there. And
so then this little thing here, you hit that button up there called go planner, and
this things pops up. So this is a window that shows where this appointment is,
and then it has tabs here that travel from the previous one to here and the travel
from here to the next one. So this is very reminiscent of the Leo thing that was
built here, which was a Virtual Earth portal for Outlook as an add-in. And so it's
the same kind of idea, but it goes inside the building, and when I'm done with it, it
will be able to do things like show you a route that goes through this building and
campus, on to the street network through Virtual Earth, and then on to the other
campus and into the building there.
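The location string shown earlier, Microsoft:115/2186, suggests a simple resolution scheme: the part before the colon names a facility whose map file is registered on your system, and the remainder is interpreted inside that map. The sketch below illustrates that idea; the registry contents, file names, and the building/room split are assumptions made for the example.

```python
# Sketch of resolving a namespaced location like "Microsoft:115/2186".
# Registry contents and the building/room convention are assumptions.

FACILITY_REGISTRY = {
    "Microsoft": "maps/microsoft_campus.xml",
    "Boeing": "maps/boeing_campus.xml",
}


def resolve_location(location: str) -> dict:
    namespace, _, local_part = location.partition(":")
    map_file = FACILITY_REGISTRY.get(namespace)
    if map_file is None:
        raise KeyError(f"No map registered for facility '{namespace}'")
    building, _, room = local_part.partition("/")
    return {"map_file": map_file, "building": building, "room": room}


if __name__ == "__main__":
    print(resolve_location("Microsoft:115/2186"))
    # {'map_file': 'maps/microsoft_campus.xml', 'building': '115', 'room': '2186'}
```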
And then you can change where the appointment is. At some point I'll be
building something here to help you pick the conference room, and there can be
reasoning here, because there's a whole path planning system behind it: you can
reason about which conference rooms are closest to the
participants, and not just closest to their offices but closest to where their
previous meetings are, so that we can actually plan the travel at the same time
that you're picking the conference room for the meeting. That's assuming you
have more than one choice, which as we know is not that common around
Microsoft. But anyway, this is what you get if you click on that tab that says show
me the travel from my previous appointment, so there we are, we just had to go
down the hall. But anyway, we'll do at some point some time modeling and
others things in there.
So that's the idea of the facility map -- here we go -- of the facility map
framework. Some of the things that are on the schedule for the future: more work on
map registration. I have a new path planner algorithm, a little variation of A* that
I think is more suitable for doing buildings on a campus without looking at all
the buildings you don't actually need to look in, building the bridge to Virtual
Earth, the mobile client, doing the real-time update. It's not there yet. The Web
services are there but not the real-time update of the map. And then building
shelves for various existing components. And then deploying the go planner.
We'd like to do that.
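The campus-specific variation of A* mentioned above isn't described in the talk, so as a baseline here is plain A* over a small graph of rooms and hallways, with a straight-line heuristic. The graph, coordinates, and costs are made up for illustration.

```python
# Plain A* over a toy room/hallway graph. The campus-aware variation from
# the talk is not reproduced here; this is just the standard baseline.

import heapq


def a_star(graph, coords, start, goal):
    """graph: node -> [(neighbor, edge_cost)]; coords: node -> (x, y)."""

    def h(n):  # straight-line distance to the goal
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    open_set = [(h(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = ng
                heapq.heappush(open_set, (ng + h(neighbor), ng, neighbor, path + [neighbor]))
    return None, float("inf")


if __name__ == "__main__":
    graph = {
        "office": [("hallway", 5)],
        "hallway": [("office", 5), ("elevator", 20), ("conf-2186", 12)],
        "elevator": [("hallway", 20), ("lobby", 15)],
        "conf-2186": [("hallway", 12)],
        "lobby": [("elevator", 15)],
    }
    coords = {"office": (0, 0), "hallway": (5, 0), "elevator": (25, 0),
              "conf-2186": (5, 12), "lobby": (40, 0)}
    print(a_star(graph, coords, "office", "conf-2186"))
```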
And then with real estate and facilities I'm doing work on how we actually take a
Microsoft building and campus and build the model. It's very challenging. The
Atlas project is to build a model of all the 400 properties at Microsoft. Then we
need shelves for the RFID server and other things. And these are some
standards that are out there. And eventually we'll build bridges to all those.
So that's the facility map framework and, you know, the idea is to sort of show
the way. There's some interest from our health care group, basically anybody
that I talk to that works with modeling the insides of buildings is interested, even
though it's not a high-tech 3D whatever, whatever, but it still is a very useful
system, and we'll see where it goes.
(Applause)
>>: We have time for one question. If there happens to be one. Okay.
>>: What do you think about using a PDA instead of a notebook to -- sorry.
What do you think about using a PDA instead of a notebook to draw the map on?
>> Steve Shafer: You mean to draw the map or to see where you're going next?
>>: Yes. I mean in your tools you are using Visio to create those objects, but
in a real-world scenario you are going into a building and you may not know
-- you may not have the floor plan, so you may go around the building and do
the drawing of some objects.
>> Steve Shafer: So let me see if I understand two scenarios here. I think this is
a very important point. Okay. Scenario one says that you walk into a building,
you know nothing, so you're going to walk all around that building presumably
drawing a floor plan as you go.
Okay. Scenario 2 says there was an architect who built this building, they had
CAD drawings, the building creators advertised a version of that data which you,
through your 802.11 connection, can pick up, so when you walk in the door of
that building you see the map right there. Okay. Which of those scenarios do
you think -- yeah?
>>: Number one scenario.
>> Steve Shafer: You think that? Okay. I'm less interested in that scenario.
The applications that I've looked at are almost all taking place in commercial
buildings and enterprises where CAD models exist and where there's a
tremendous amount of leverage to be gained by using that already existing data.
I realize it's possible to get that data through sensors as you wander around or
through people making sketches, but I am not a believer that that's going to be a
practical solution for the commercial buildings. For your home I could see doing
that, but probably not for any sizeable building.
>>: Thank you, Steve.
(Applause)
>>: All right. Our next speaker is from Virtual Earth, Duncan Lawler, who is
going to talk about the Virtual Earth 3D Control.
>> Duncan Lawler: All right. Is my mike on? Good. Okay. So I'm Duncan
Lawler. I work on the team responsible for creating the run-time control that
displays all the cool data that Steve was talking about. So my job is to take all
that great data and get it down as fast as possible, although my boss tells me it's
never fast enough.
So at the beginning of the project we kind of took a different tack. Rather than
building a stand-alone application, what we developed the 3D control as is an
interactive component that can be used in any number of applications. So we've
obviously developed an application around it, the Live Maps Website that uses
it, but it really can be used in any application that you want.
It's a managed control. People call it an ActiveX control, but it's not. It runs in native
apps, Web apps, managed apps, WPF apps. You can pretty much embed it in
any kind of application that you want and then drive the -- the application actually
drives the control and tells it what to render. So it's as flexible as we could possibly
make it, but we realize that, well, you know, that may not be flexible enough. I
mean you like the way that we render our 3D data, but you want to add some of
your own data to it or maybe you want to change the way the rendering works or
add some special effect.
So what we did is we created a plug-in model. Now, being managed code, we
have something that we can do that other people really can't, and that is we have
a managed sandbox that we can execute in. Once the 3D control is installed, what it
can do is dynamically download completely untrusted code from the
Internet; it loads it up in what's called an app domain, which is restricted in
permissions, and so we know that code can't possibly do anything dangerous.
We expose to that code a set of interfaces that we know are safe to call, and
the code can only call those interfaces. So that means there's no install-time
procedure at all; if you're running in a Web app you can load up our 3D control in
that Web app and then have your code load up inside our 3D control and do all
kinds of interesting stuff. Well, what kind of stuff can you do? The API breaks
down into three categories, and I'll talk about each of those categories. Data
source plug-ins, rendering plug-ins and controller plug-ins. When it's loaded from
script, it executes in the partial trust environment, as I said. You can also, if you
feel like it, you can install it locally. When you install it locally, you can do a little
bit more, you basically break out of that partial trust sandbox and you have full
access to the system resources.
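The sandbox described above relies on .NET app domains with restricted permissions, which has no direct equivalent in the sketch language used here; the snippet below only illustrates the narrower idea that the host hands plug-in code a small, explicitly safe surface rather than its own internals. All names are invented for the example.

```python
# Illustration of "expose only safe interfaces to untrusted plug-in code".
# This does NOT replicate .NET app-domain isolation; names are assumptions.

class SafePluginSurface:
    """The only interfaces the host chooses to expose to plug-in code."""

    def __init__(self, scene: list):
        self._scene = scene

    def add_pushpin(self, lat: float, lon: float, label: str) -> None:
        self._scene.append(("pushpin", lat, lon, label))

    def log(self, message: str) -> None:
        print(f"[plugin] {message}")


def run_plugin(plugin_entry, surface: SafePluginSurface) -> None:
    # The plug-in only ever sees `surface`; it never gets the host object.
    plugin_entry(surface)


# A toy "downloaded" plug-in: just a function taking the safe surface.
def bunny_plugin(api: SafePluginSurface) -> None:
    api.log("placing a marker")
    api.add_pushpin(47.64, -122.13, "hello from a plug-in")


if __name__ == "__main__":
    scene = []
    run_plugin(bunny_plugin, SafePluginSurface(scene))
    print(scene)
```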
And although I've grouped these APIs into three categories, any plug-in can call
any one of those categories of API. So the data source: you pretty much want to
do a data source when you like the way we render things and you want to add your
own data. And 80 percent of the requests we get from people are, hey, I love your
data, I just want to put my own buildings or things into your scene. The data source
API is what you want to use. For a data source what you do is you implement an
abstract interface that we define. All you have to do is override a couple
functions and your data will just start showing up in our scene.
If that's not good enough for you, if you want to do some of your own custom
rendering, we have an API that gives fairly complete access to our rendering
pipeline. It allows you to do all kinds of special effects, animations, whatever you
really want to do.
And of course being part of another application we realized our control model
may not be suitable for all applications so we allow you to override our control
model and change it to do what you want.
So for a data source, the interface that you implement, you can
think of it as a big spatial database. We will issue a query, and I'll pull up a
diagram how we do that. As the virtual camera moves around in the 3D world,
we'll decide, hey, we need to load some data in this particular area, and we'll
query your data source interface and say, hey, data source X, give me back data
that you have in this area. You can return 2D type primitives that will just lie on
the surface, you can return 3D geometry with textures, you can return imagery
that gets draped over the DEM. Whatever you return, we aggregate all of that
and then we create graphics objects out of that and display that to the user. All
this happens asynchronously in the background, so as the camera is moving
around it's simultaneously issuing queries, processing the data and eventually
dumping that into the render thread.
The control manages, you know, all of the memory, so it will query your data
source and get your data back and then keep that in memory for a while and as a
user moves away it will handle throwing that out for you. So you have to do very
little to get your data to display. That was the goal because that's a primary user
scenario.
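A rough sketch of the data-source idea just described: the control asks each data source for whatever it has inside a geographic region as the camera moves, and aggregates the results. The interface and names below are invented; the control's actual managed API is not reproduced here.

```python
# Sketch of a region-queried data source. Interface names are assumptions.

from abc import ABC, abstractmethod
from typing import List, Tuple

BoundingBox = Tuple[float, float, float, float]  # (min_lat, min_lon, max_lat, max_lon)


class DataSource(ABC):
    @abstractmethod
    def query(self, region: BoundingBox) -> List[dict]:
        """Return primitives (pushpins, 3D meshes, imagery) inside the region."""


class PushpinDataSource(DataSource):
    def __init__(self, pins: List[Tuple[float, float, str]]):
        self.pins = pins

    def query(self, region: BoundingBox) -> List[dict]:
        min_lat, min_lon, max_lat, max_lon = region
        return [
            {"type": "pushpin", "lat": lat, "lon": lon, "label": label}
            for lat, lon, label in self.pins
            if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
        ]


if __name__ == "__main__":
    source = PushpinDataSource([(39.74, -104.99, "Denver"), (36.11, -115.17, "Vegas")])
    # The control would issue queries like this as the virtual camera moves.
    print(source.query((35.0, -120.0, 40.0, -100.0)))
```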
If, on the other hand, you don't like the way we display data or you want to do
something that we don't do, you can create your own rendering plug-in. The API
is very similar to any 3D rendering API that you've seen before. It's just that we
have to do very, very careful validation of the data before we display it, because
access to the graphics hardware is a fairly privileged operation and we don't want
untrusted code necessarily partying on your graphics driver.
But you can examine any of the steps that we do. Our rendering pipeline can be
thought of as a series of steps and each step executes sequentially, puts stuff
into the render queues and the next step can look at what's in the render queues
and pull stuff out. So for example if you didn't look the way our sky looked, you
can yank out our sky step and put in your own. If you wanted to display the
stars, the realistic stars you could you put in a new step that displayed actual star
positions. If you wanted to take our buildings and color them purple, I've seen a
plug-in that does that, you know, it looks at the buildings that we put in the render
queues and I say, no, I want to change some attributes on those.
If you don't like our final render step where we render to the screen, you can
output to an off-screen buffer if you felt like it. So we have a lot of flexibility and
control that you can do here.
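The "series of steps over shared render queues" idea might be sketched as below: each step runs in order, reading and writing the queues, and a plug-in step can rewrite what an earlier step queued, like the purple-buildings example. The names are invented; this is not the control's real rendering API.

```python
# Sketch of sequential render steps sharing queues. Names are assumptions.

class RenderQueues:
    def __init__(self):
        self.items = []  # each item: {"kind": ..., "color": ..., ...}


class SkyStep:
    def run(self, queues: RenderQueues) -> None:
        queues.items.append({"kind": "sky", "color": "blue"})


class BuildingStep:
    def run(self, queues: RenderQueues) -> None:
        queues.items.append({"kind": "building", "color": "gray", "id": 42})


class PurpleBuildingsPlugin:
    """A plug-in step that edits attributes queued by earlier steps."""

    def run(self, queues: RenderQueues) -> None:
        for item in queues.items:
            if item["kind"] == "building":
                item["color"] = "purple"


def render_frame(steps):
    queues = RenderQueues()
    for step in steps:  # steps execute sequentially over the shared queues
        step.run(queues)
    return queues.items


if __name__ == "__main__":
    print(render_frame([SkyStep(), BuildingStep(), PurpleBuildingsPlugin()]))
```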
For the control model for how things are executed, we came up with a fairly
flexible system that we call the binding system. User interface events are what
we call events and they're defined by our control, so we supply a bunch of user
interface events. Our control also exposes what we call actions that can be
taken based upon those events. And then there's a simple XML file that maps
between those two. So if you don't like our key mappings, actually in the live
control this file exists on your hard drive and you can go edit it and change the
key mappings to whatever you like. A plug-in also has the capability of defining
its own events, so it can be an event source, and its own actions that it can take
based on events. And you can programmatically modify those bindings. So based
on that, you can do pretty much anything you want as far as intercepting user
interface events.
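As a small illustration of the binding idea, a file maps user-interface events to named actions, and a plug-in can override bindings programmatically. The file format, event names, and action names below are invented; the real control's bindings file is not reproduced.

```python
# Sketch of an event-to-action binding table loaded from XML. Assumed format.

import xml.etree.ElementTree as ET

BINDINGS_XML = """
<bindings>
  <bind event="Key.W" action="MoveForward"/>
  <bind event="Key.S" action="MoveBackward"/>
  <bind event="Mouse.Wheel" action="Zoom"/>
</bindings>
"""


def load_bindings(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    return {b.get("event"): b.get("action") for b in root.findall("bind")}


def dispatch(event: str, bindings: dict, actions: dict) -> None:
    handler = actions.get(bindings.get(event))
    if handler:
        handler()


if __name__ == "__main__":
    bindings = load_bindings(BINDINGS_XML)
    bindings["Key.W"] = "OrbitImage"  # a plug-in overriding a binding at run time
    actions = {
        "OrbitImage": lambda: print("orbiting relative to the bird's-eye image"),
        "Zoom": lambda: print("zooming"),
    }
    dispatch("Key.W", bindings, actions)
```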
So let me give you some real concrete examples of what you can do with this
plug-in model. Has anyone seen the birds-eye in 3D feature in VE 3D? Have
you guys played around with that? If not, play around with it. It's a lot of fun.
It's very similar to Photosynth but based in the real world. What we did is we
took all of the bird's-eye imagery we have, which is kind of geo-registered: we know
where the plane was when it took the image, and so we know where the ground
footprints of each of those images are. This bird's-eye feature in VE 3D was
implemented as a plug-in. So the code that does this is dynamically downloaded
off the Internet, inserts stuff into the render queue. It actually changes your
bindings and user interface at the same time. Most users don't even notice it, but
when you click on one of these images, we completely change how the user
navigates. Rather than navigating relative to the real world they're actually
navigating relative to the point of the image. But it's kind of a seamless change
and users don't even really notice it.
So that's an example of one that does a rendering plug-in coupled with a
bindings plug-in. And the next one, as an example of this, is those cool trees
that are supplied there; that's a data source plug-in, so that piece of code got
downloaded dynamically to display that data. The advantage
of doing this for ourselves, by the way, is that let's say there's a bug in this code: we
can update the plug-in, and we don't have to reinstall anything on the user's machine.
So it's a big advantage for us.
The last example that we have in production right now is the community model
plug-in. So when you publish your -- when you're using collections and you're
putting together your 3D models, those are displayed by a data source plug-in as
well.
Now, to publish models to 3D, this is kind of the last part of the API, and it's not
part of the core control itself, but it's potentially interesting to people. We have a
library that allows you to publish 3D models to collections, and that's what the 3D
via tool uses to publish its data up. So all of this stuff right now is actually
out there; we've implemented all the code. If you install the 3D control you have
this on your hard drive already, we just don't have the documentation ready for it
yet.
So that's one of the things we're going to be working on in the next release. We'll
have the documentation available so people can start using this without, you
know, in-depth programming knowledge.
So I'm going to hop over really quick here and show you a couple of real quick
demos of how this stuff works. Let me fire up one of these plug-ins. I know
we've got time for a couple. Lines, data source, geometry. So this is an example of
a Web app hosting the control. It's dynamically downloading a plug-in that's
going to be an actor data source. And it's going to start displaying all kinds of
little happy bunnies that start hopping around the globe. There's one over there.
So this was about, I don't know, 20 or 30 lines of code to write this. We've got an
example of -- let's see here. This is an example of the 3D control being hosted in a
simple WinForms app, so again about 20 or 30 lines of code. You've got it hosted in the
WinForms app. There it goes. It's all 3D. So basically you can leverage all the
hard work we did to make all these terabytes and terabytes of data available in
your own application. And I'll show one more. Geometry.
I believe this is an example where we're changing the bindings and we're just using
simple 2D geometry to place geometry. So you can add 2D geometry really
simply. All right. I think that's all I had. I'm right about on time, so I probably have
time for about one question.
>>: Thank you.
(Applause)
>> Duncan Lawler: Michael?
>>: How abstract is your data -- is your API about the conceptual model of
geometry? In other words, if I know how to retrieve, I have the data source, I
know how to retrieve the data and I know how to render it but it's fractals, they
don't give you triangles or any conventional format?
>> Duncan Lawler: You could do that with -- you could definitely do that. So the
control will query the data source interface, and then we have kind of a
hybrid thing called an actor data source where you can supply a callback for
how to convert the data that you retrieved from the data source into graphics
objects. So definitely that is possible in the current API.
>>: Thank you.
>>: (Inaudible)
>> Duncan Lawler: Yeah, I don't have that one on this machine on the podium.
>>: Thank you, Duncan.
>> Duncan Lawler: Yup.
(Applause)
>>: So this next talk is a result of a challenge. I sent an email to the Virtual
Earth team and said can anyone over there demo the ten best third-party Virtual
Earth applications in ten minutes. I think that's a hard thing to do. And --
>>: It can't be done. We're only getting nine. (Laughter)
>>: Roger Mall sent an email right away after I sent that challenge and said I'll
do it, I'll do it, but then a few days ago he got mysteriously called away to a
business trip so he couldn't be here, so that challenge has been handed off to
Mark Brown, who is also from Virtual Earth, who has been madly trying to get his
demos.
>> Mark Brown: I'm actually using Mesh right now to pull a video I forgot, sitting on
my desktop right now. So we'll let that go. So, hi. I'm Mark Brown, product
manager on the Virtual Earth team. My job is twofold. I'm here to help get
developers really excited about Virtual Earth and help evangelize the platform,
get them using it themselves, hopefully instead of Google.
And then the other piece of my role is to help get our partners excited and going
out and creating very cool applications.
So I'm going to go through eight, hopefully nine, if I can get this video across the
mesh. I like living life on the edge, so we'll see how it goes.
So the first one I'm going to show is an application written by msnbc.com called
Bridge Tracker. And if you remember, some time ago there was this whole stink
after that bridge in Minnesota collapsed, so -- look at this. Here we go. Resolve
that. So these guys went and pulled all the bridge data from the Washington
State Department of Transportation and then built this little interactive map using
Virtual Earth.
And as you can see here, they've pulled all this data in; I think they're using a
KML file to pull all this stuff in. But you can go and check out all the dangerous
bridges, if you happen to live in downtown Seattle, that you have to drive over to
get all the way to work every morning.
So if you want to feel good about your commute, maybe take a boat. (Laughter)
The second one I want to show is a company called John L. Scott. So we do a
lot of work in the real estate industry. As you can see, it's very much a natural fit
to use a map to display what properties you have for sale and then to find out
where they are.
So these guys did some very cool work and what they did is they created this
little neighborhood wizard. So what you can do is let's say I want to live
somewhere between Juanita Bay and downtown Kirkland and I want to just kind
of be along the water here.
So what I can do is create a little polygon right here and then just using our
shape objects -- whoops, I want to cancel that -- and then I can just get the
homes within that area. Right now that's a good thing given that there's a lot of
houses for sale right now on the market, so if you can narrow it down a little bit,
definitely a very good thing. And so they've just got a little bubble right here, you
can get a view from the property rather than a view of the property. So basic
information and you can click on property details.
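At its core, the "homes within the polygon I drew" query comes down to a point-in-polygon test over the listing coordinates. The sketch below uses the standard ray-casting method; it is a generic illustration, not John L. Scott's or Virtual Earth's code, and the sample coordinates are made up.

```python
# Generic point-in-polygon (ray casting) over listing coordinates.

def point_in_polygon(x: float, y: float, polygon) -> bool:
    """polygon is a list of (x, y) vertices; returns True if (x, y) is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


if __name__ == "__main__":
    neighborhood = [(0, 0), (10, 0), (10, 8), (0, 8)]  # the drawn polygon
    listings = {"house A": (3, 4), "house B": (12, 5)}
    for name, (x, y) in listings.items():
        print(name, "inside" if point_in_polygon(x, y, neighborhood) else "outside")
```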
The next one I want to show is a partner application. So we do a lot of work
with ESRI; if you're not familiar with these guys, they are the 800-pound
gorilla in the GIS space.
They really like our interactive SDK so much so that they created their own that
was almost identical to what we've got. So what I'll do here is I'll run a little thing,
querying data using a buffer. So what they did, you've got a little thing here, I
could point that center -- whoops, center point there, and what I'll do is I'll create
a 300 foot buffer around it, and then basically querying off that data there, they
can then select all the parcels of land all the way around it that touch that buffer.
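Conceptually, the buffer query just shown reduces to "select everything within a given distance of the chosen center." The sketch below simplifies parcels to single points (a real GIS, such as ESRI's tools, buffers the actual geometry); the coordinates and radius are made up for illustration.

```python
# Simplified buffer selection: parcels represented by one point each.

import math


def within_buffer(center, radius_ft, parcels):
    cx, cy = center
    return [pid for pid, (x, y) in parcels.items()
            if math.hypot(x - cx, y - cy) <= radius_ft]


if __name__ == "__main__":
    parcels = {"parcel-1": (120.0, 40.0), "parcel-2": (500.0, 10.0), "parcel-3": (50.0, 290.0)}
    print(within_buffer(center=(100.0, 100.0), radius_ft=300.0, parcels=parcels))
```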
Next I'm going to show another little -- I'm actually getting through this pretty fast,
huh? This is good. I should have had more.
So another partner of ours, IDD Solutions, did something recently for British
Petroleum. These guys have a problem with oil rigs being out in the middle of
the Gulf of Mexico and hurricanes, right. So how do we manage our assets of oil
rigs out there, or pipelines, or whatever other physical assets we've got out there,
and then basically use maps to help with our decision
support here.
So this is using a whole bunch of different shape objects. You can add all kinds
of stuff in here so I can turn off the radar or I can turn off other stuff or weather,
right, and they just layer this stuff on. What's nice is that I can look at historically
what hurricanes have come in and then find out what the basic tracks of those
things are, and figure out, with that type of hurricane, do I need to get some people off
my oil rigs out there or shut them down. You can also float over these things and
get an idea of the different assets they've got out there and what their status is.
The next one I'm going to show you is this New York City Transit. Oh, let's see.
Give me an address or a place down lower Manhattan you guys like. Come on,
don't be shy. None of you guys go to Manhattan? Okay.
Lower Manhattan. And then we'll go to Times Square. These guys used our
MapCruncher software. Whoops. Northern end, no I want the southern end.
There we go. So what they can do is help you with all the transit information
getting around Manhattan. The nice thing is they used our MapCruncher
application, where is it, oh, there's that, where's all the transit routes? Okay.
Well, New York City's having a little issue here today. In short, what these guys
did is they used MapCruncher and then basically overlaid all of the bus and trains
around Manhattan right there so that you knew exactly how to get to -- here, let's
try this. No. Okay. And I've got five minutes.
Another example of MapCruncher here is Portland Community College right
here. So these guys took a (inaudible) image of their Portland Community
College campus and then simply laid it right over. So now you can get an idea of
parking structures, the names of buildings here. They've got a very difficult to
read legend right off of here. (Laughter) I don't think they thought that one quite
through.
This is very interesting and I think usable, and when I talk to students a lot, I say
you know, this is a great example of how you could use our software and our
MapCruncher to go do something cool. This next one, any of you guys ride
motorcycles? No? Aren't we all into motorcycles here? Harley-Davidson
created a really cool piece of software called the Ride Planner and they're using
our map technology to allow people who are into biking -- God, I'm having a
really tough day today, aren't I?
Let's go to -- let's go down south. These guys use Virtual Earth and then overlaid
it with Flash to allow people to create ride maps, right, so essentially
kind of simple maps with directions. They did something really cool with this, though,
but it's only cool if it works. (Laughter)
Okay. Well, I guess I'll just tell you. So what they did was they allowed users to
go in and then essentially drag your routes, or drag the waypoints, or create
different waypoints along the way, so that you could change your route any way you
wanted, right, which is something we can't do with the core platform right now or
the core control. You have got to build that in.
The other nice thing they did is they said, it's cool to pay for this stuff to help our
users, but why not monetize it at the same time? So the other thing they did is
they did a deal with Best Western for hotels, so it will pop up Best Western
Hotels anywhere along your route there, and then also with fuel as well. So
there's some gas stations all along your route right there.
So they're helping to monetize using co-marketing, right, this is a great example
of an end customer saying, yeah, this is great we want to add more value to our
customers, we could add even more in and probably pay for this whole thing by
just getting these other companies to participate with us.
Okay. Next one. The next one I want to show you, once I get my little Xbox
controller here plugged into this thing, is iFly. These guys created a little
application -- is this thing going to work for me, there you go -- a way to track
flights. So what you can do is you just type in -- I kind of moved past it because
this site is a little slow. You can go in, on the plane, then you can go get yourself
a little wing-man view there, all right, or a little cockpit view. The nice thing
about -- I guess the not-so-nice thing about this demo is this thing isn't really very
interactive, these planes move a little slow, so I tend to speed them up a little bit,
you know.
But it's a really cool use of our Virtual Earth platform and its 3D capabilities in
here. And that's my last one unless I've got this. Hopefully the sound is working.
>>: For the first time ever a tornado rips its way through the heart of downtown
Atlanta. This is how the radar looked at 10:00 o'clock eastern last night. Winds
registered at 120 miles an hour. Let's use --
>> Mark Brown: So this is our other piece, using Virtual Earth in media and
entertainment.
>>: The path of the tornado and the damage. Phillips Arena downtown, look at
that facade just blown to bits. Meanwhile CNN Center, windows blown out, the
top of the roof has a big hole in it, as well as Olympic Park. Look at this, two
65-foot light towers brought down to the ground. Limited access because of --
>> Mark Brown: Using our shape object.
>>: Uprooted, debris everywhere downtown. The police are rerouting traffic,
they will probably do so --
>> Mark Brown: More shapes.
>>: -- for the next couple days ahead. Meanwhile the SEC basketball
tournament was being played at the Georgia Dome. They sustained damage over
there at the roof, so now it will be played right over here at Georgia Tech
University. That arena only --
>> Mark Brown: Push pins.
>>: Family, friends, and credentialed people will be the ones that get into this
event. Fans that won't get in will now get a refund. The game starts today at
noon.
>> Mark Brown: So these guys at Weather Central have done a lot of great work
with us, and essentially what they do is they sell this stuff here as a package, so
if you've got a breaking news story you can go in, you create the
push pins, little information pop-ups, you can create polygons around all that stuff.
Obviously with the weather maps, the same thing as well. AccuWeather is
another partner of ours in this space as well that sells a platform using Virtual
Earth in there so that you can -- and in fact they're even doing cool stuff. So
they'll add in rain and snow. And Duncan talked about replacing the sky. These
guys are replacing the sky. So that when you go and look at a city in 3D, they're
actually throwing snow in there or clouds in there or rain in there. So that's it. I'm
all done. I got nine out of ten so.
(Applause)
>> Mark Brown: One question, half a question?
>>: That was a nice job on a tough challenge. Time for one question. I see one
in the back.
>>: Yeah. Since you mentioned ESRI, that you do some partnership with
them, are they also a competitor, as are MapInfo and Intergraph, or is it just Google
that you guys are concentrating on at this point?
>> Mark Brown: I don't consider them a competitor, right; I mean their target is
the hard-core GIS guys, right? We're just a platform provider, is the way I kind of
view that relationship. I don't think they view us as a competitor, really. So in fact
we've now got something where we're putting Virtual Earth inside their
product as well, right, so helping to deploy even more Virtual Earth. So for them,
that's the -- I think that's all goodness, and for us as well. Okay. Thank you.
(Applause)
>>: Our last talk from this session is by Vincent Tao. He's going to answer the
question: is spatial special?
>> Vincent Tao: I'll show you "is spatial special," and this is my 2008 edition.
Many of you are aware of my past history: I worked in academia for
about ten years as a professor before I moved into business, so my
thinking is always a mix of academic arrogance and business randomness.
I can first talk about the spatial 2007 edition, if you haven't heard about
that one; I listed about the top ten innovations in geospatial technology, and feel free
to check my personal blog site about spatial.
Okay. Today what I want to talk about is location; I've been thinking about this one over
the last year. Let's first start with Web 1.0 and location. What is Web 1.0? It is the death
of distance. The Web is very much transparent; there's no location concept
in the Web there.
And the social network is really independent of distance, you know; we
can communicate over the network regardless of where you are. And
e-commerce has no restriction to location, and that's the success of eBay and
Amazon: you can sell stuff on the Web regardless of where those books
or those things are coming from. So there is no restriction to location.
And also the search is really universal: you can find stuff over the Web
and you don't pay much attention to where those materials, those
documents, those companies are coming from; you can do business
with them.
This whole phenomenon is really about what we call the death of distance. Now Web
2.0. What is Web 2.0? In my view this is the emergence of location. And
the first thing that came out is we realized that the Web itself is still physically
spatially organized, you know, IP addresses, stuff like that, and people started to
really crunch those IP addresses, where people are coming from, to
analyze that business and consumer intelligence data to understand
where they're coming from. Actually, you know, many of those companies,
including Microsoft, have done a pretty good job in terms of analyzing those data sets.
And local search is really hot, and mapping, Virtual Earth, you know, the
stuff that will (inaudible), that's really hot stuff now. And location based
advertising; I just mentioned that. And we start to see many of those
location based advertisements and targeted audiences; for example Yahoo's
(inaudible) program has been built over the last two years, and the one primary
driver is location.
And local commerce started kicking off, and then we eventually find out 80 percent
of our business is actually within about five kilometers of your proximity; just
looking at the stuff you buy and shop for, and where it comes from, it's just from your
proximity. You start to target that long tail business; that's what we call air
commerce, or local commerce.
All right. So finally we start to understand what this location thing is really
about. It's about something we call a dimensional index to organize
information. You know, Google and others always talk about wanting to organize
information universally, you know, your data. But looking at that data content,
location is one dimension. A view of it is one slice, or in other words an
indexing of it. But unfortunately our current indexing and search engines are really
based on keywords, and location is not just a part of that; normally we
treat location as one keyword, but that's not correct.
And if you're coming from the mapping world, we always talk about let's organize
all this geospatial information, organize spatial information. But the real
notion now is: organize your information spatially, spatially.
And this leads me to thinking about actually moving from W3 to W4, which
is what, when, where, and who. But looking at the search engines we have these days,
we're basically just barely working out this "what," which is keyword
based search, not even at the level of semantics yet. Do we have a really
strong where-based search, or when-based search, or even who-based search?
Actually I see several startups now building those sorts
of people search engines, and you can, you know, put in Vincent Tao and
actually find -- oh, interesting, find the history of Vincent Tao, you
know, where he's coming from, where he went to school, stuff like that. Those
search engines are incredible.
But again, we're still at quite an early stage. But this notion is getting very
clear now in terms of moving from W3 to W4. All right. In business we
spend a lot of money on location. People always ask me the
question, what is the killer app of location? What
is the killer app there? And I ask myself, what is the killer app of my watch here? A
clock. It's still the time. Time is the killer app of your watch and is its primary
function. So looking at location, my topic here is: where is the "where"
here? And online mapping, clearly, is one of the killer apps of location.
But if you look at the major activities, or what we call the entry points, for all of us
in the online world: it's about search, it's about portals, it's about community, which is
social networks like Facebook, MySpace and such, and entertainment, gaming,
commerce, like eBay, Amazon, or the shopping sites, and communication, like
Hotmail, Messenger, all the email things. So basically there are just about
seven entry points.
You probably can tell me another one, but I don't know it yet; those are the seven major
critical entry points for all of us in this room, I believe. Sometimes a
site is a combination of those different activities, but those are the major
activities that drive the online user community these days. And where is the
location?
I can see here that online mapping is very clearly a destination, or
killer application, for location. But among the rest of them, location is just one
element of it. Location is not a driver. Location empowers many of those
sites and activities, but I want to make it clear again: it's not the driver. It's not the
driver. Just like you design your watch just for the clock -- I mean for the timing --
and everything else is different.
Okay. So in business I've always been thinking about two questions. Among those
activities, what local information are people really looking for? Or,
from a search engine perspective, what information indexed by location are people
looking for? Just a subtle difference there, but it does make a big, big difference in
terms of investment: what actual things are you looking for.
So I summarize this in these five, you know, buckets, looking at what basic
things people are looking for by location. And number one, locate: this is basic
general mapping, find an address, a place, get a map, stuff like that. That's one
everyone understands.
The second one, as you move into more of a portal experience, is explore: city
guides, tours, trip planning, browsing, all that fantastic visual stuff that gives
you a kind of local guide or helps with vacation planning. This is more like a
portal experience. The next one, of course, is find, which is the search we talk
about: find a business, search for restaurants, hotels, real estate, find news,
for example, even find jobs. All of that relates to local, but local is just part
of it; this is what we call local search.
Commerce, we're still at an early stage of that. For example, find buyers, find
sellers: I have a two-year-old car I want to sell, can I find those buyers, or
find those sellers? Products, researching a restaurant, even booking a movie
ticket, can I do all of those things? They're all local activities. We're still
early on that. We're still early in terms of local commerce.
Community is even earlier: for example, sharing my favorite fishing spots, or
wanting to find hikers to go hiking with this weekend, or finding interest groups
through social networks.
The location-based social graph is getting a lot of attention these days, and
people try to hack, you know, Facebook and the social networks to find out the
locations among those connections. But again, that's at an early stage. We're
further along on the mapping side, we're doing a lot more on local guides now, and
local search is something we have invested a lot in, but the rest of them are
still at an early stage. We have not really gotten a very good understanding of
all of this yet.
So that's the first question: what are people looking for? I'm pretty sure you
can expand this into a very long list.
The second question to me is how people actually look for this information. Of
course I'm talking about the online world here. This is a study many of you are
probably interested in; it's more on the business side of things. Looking at it
as a business, what we're really interested in is people's queries. We use the
term queries: basically, how many times do you search, how many times do you try
to find those things? Is that something you do on a daily basis, on an hourly
basis, or just a weekly basis, for example?
So based on our market study, people still find information from the yellow
pages, newspapers, and all those kinds of traditional media; that's still about
21 percent, which is no surprise to me. This is, of course, the U.S. situation.
But you can see the growth there is declining, about 0.7 percent now. Looking at
the online world, the PC queries, where you go through the PC, go online, and find
local information, things like jobs, movies, all that local stuff, that's about
50 billion queries, 50 billion queries, again for the U.S. And the growth rate is
about 22 percent on the Web side, and the whole piece of the pie here, that chunk
is about 71 percent, which is very large. People go online to find that
information.
How about voice? That's still how many people are used to doing it, 411 in this
country, 114 in some others: you try to find local information using those
directory services, and that's still about five percent, using the phone. And
phone data is now coming: you basically use your mobile phone, go online, and try
to find that local information. That's still at an early stage, about two percent
right now. But the growth rate is just incredible, 71 percent, through the phone,
through the data plan.
In-vehicle is coming too. Many times you actually try to find local information
in the vehicle. For example, you have an appointment with your dentist, and when
you drive to the dentist's office you suddenly wonder where the office is located,
you sort of get lost, and then you really need to find that local information. Or
you're hanging around and want to find the closest movie theater where you can
kill an extra half hour. So in-vehicle is at a very early stage again, one
percent, with a growth rate of 20 percent. Again, this is the U.S. market, and
some international regions may look very different in terms of these entry points.
But the good thing about analyzing all this is that we really understand how many
actual queries there are. Overall, all this local stuff put together, our
estimate is about 70 billion queries. If we use our standard marketing and
monetization model, one to five percent of those queries can be converted to
click-throughs, meaning we can generate anywhere from 50 cents to one dollar to
five dollars per query, and you can actually estimate how much revenue you can
generate from doing this local work, this local search work. Sorry, local query
work.
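As a rough check on that back-of-the-envelope estimate, here is a minimal sketch of the arithmetic, assuming the 50-cent-to-five-dollar figure applies to each converted (clicked-through) query; the talk leaves the exact monetization model and time window unspecified:

    # Back-of-the-envelope local-query revenue estimate (figures from the talk;
    # reading "per query" as per converted click-through is an assumption).
    total_queries = 70e9                  # ~70 billion local queries
    conversion = (0.01, 0.05)             # 1% to 5% convert to a click-through
    value_per_conversion = (0.50, 5.00)   # $0.50 to $5.00 per converted query

    low = total_queries * conversion[0] * value_per_conversion[0]
    high = total_queries * conversion[1] * value_per_conversion[1]
    print(f"Estimated revenue: ${low / 1e9:.2f}B to ${high / 1e9:.2f}B")
    # -> Estimated revenue: $0.35B to $17.50B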
Again, let's think about what Web 3.0 is, a few years down the road. Of course
we have invested tremendously; you have seen, on the Virtual Earth side, the
building of all that awesome 3D stuff. What is Web 3.0? There are so many
versions of it, and I myself have many versions, but I'll just give you one
version today: a burst of the virtual world, a burst of the virtual world.
And imagine that everything is trackable. Of course, these days we really want to
tag everything and track those things. This is actually the (inaudible) family,
the first family to implant IDs in their bodies, basically in their hands. If you
go through an x-ray you can see there's a little dot here; that's the RFID tag.
So this family is basically safe: they can be tracked and located 24-7 by this.
It was incredible when it was shown. I've used this slide before, wondering how I
could possibly have something embedded in my body.
You know what, I changed my own mind. About two months ago I had my first
operation, where I had a tooth implanted in my body. And I thought, oh, you know,
it's a tooth, a lot of people get tooth implants, and I found, hey, that's very
natural. If you put some stuff in there, I would not mind. So I don't know
what's going to happen in the future, I have no idea. Maybe many of us will
change our minds as well, if you feel it's safer, if you feel it's better, or
maybe you don't have to implant anything at all, maybe there are other ways to do
it. For example, you could have glasses that you stick an RFID tag on and be
tracked, or it could even be part of your phone. There are so many ways to do it.
But the notion here is that more and more stuff will get geotracked. I believe
there are a lot of researchers and developers in this room. Once things have been
tracked or tagged, there is just an enormous opportunity for you to do business,
to build services with this -- with the geotagging.
With the 3D world, what's going to happen? People joke with me: wow, traffic is a
headache, I can't solve that problem. And I joke back with them. I say, you know
what, I have no problem, I have three easy ways to solve the world's traffic
problem. One version of it, and I have several versions of solving the traffic
problem, is of course related to our 3D Virtual Earth, the 3D world we are
building. Think about it. If your car has GPS, that's very easy: it can be
tracked. And if the world is modelled in three dimensions, then if I want to go
from A to B, all I need to do is use my phone, tied to a service, and say I want
to go to the Space Needle, starting at 3:00, and I want to get there at 3:30. You
can remote-control the car, because the world is modelled and the car can be
tracked; there's no need for a driver, I can just get the car from A to B. Do we
even have a traffic problem? No, because everything is managed by this traffic
fleet management center, sending cars from here to there. So you just make sure
you get in your car on time, because the car is going to take off automatically,
dispatched by the fleet management center. So traffic, no problem. Easy to
solve.
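Purely as an illustration of that fleet-management idea, a trip request to such a center might look like the sketch below; every name here is hypothetical, not an existing Virtual Earth or fleet service:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TripRequest:
        car_id: str                  # GPS-tracked vehicle
        origin: tuple                # (lat, lon) of point A
        destination: tuple           # (lat, lon) of point B
        depart_at: datetime
        arrive_by: datetime

    def dispatch(request: TripRequest) -> None:
        """Hypothetical fleet-management call: the center would plan a route
        through the 3D world model and drive the tracked car automatically."""
        print(f"Routing {request.car_id} to arrive by {request.arrive_by:%H:%M}")

    dispatch(TripRequest(
        car_id="car-42",
        origin=(47.6740, -122.1215),        # roughly Redmond
        destination=(47.6205, -122.3493),   # Space Needle
        depart_at=datetime(2008, 6, 4, 15, 0),
        arrive_by=datetime(2008, 6, 4, 15, 30),
    ))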
And amazingly, with the Internet, I think we thought: hey, the Internet has been
around ten years now, nothing is fancy anymore, everything is already there. But
I find we're actually at a really early stage; we're only starting to really
understand the Internet world now. I find my kid spends three, four hours on the
Internet, and I can't imagine what her life is going to be like down the road,
how much time she's going to spend there. In other words, her life can really be
virtual. Like this, for example, this guy, you can play in the virtual life like
this.
And just to give you a quick example, this is Chuckie, and he really likes guitar
work, and now he's a rock star in the virtual life, actually in Second Life. And
this is a lady called Katie, and she dreamed of being a model, but for some reason
she was not selected as a model. That's her first life. But in Second Life she
built her own avatar and has now become a top model in Second Life. Last year
Playboy selected 100 models in the world, and one of them is not human; it's this
digital model in the virtual world. So these things are really happening around
the virtual life, and as for our investment, we're still looking at how we should
really invest in this stuff.
And rather than stand between you and your lunch, I'd like to thank you for your
attention. Thank you.
(Applause)
>> Vincent Tao: Do we have time for one question?
>>: We'll take one question, if anyone happens to have one? There's a lot of
material there. Okay. I see one.
>>: So what are the hard computer science problems in this vision? Is it just
busy work, digitizing things, or is there real computing there?
>> Vincent Tao: Of course it's basically many of those problems; you have seen
many of them in the presentations today, anywhere from tagging to understanding
to search to indexing. Fundamentally, I think you'll be looking at what I imagine
as the 4W series here; fundamentally it's organizing the data with location as
one of the dimensions. And, you know, if you've dealt with mapping before -- a
keyword is one dimension, but mapping is actually two-dimensional, because you
have to get your X, Y, Z coordinates tied together with your semantics. That's
actually the challenging part.
We've been doing that for a year. By the way, there has actually been no real
location-based search engine built for that case yet.
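To make that "location as a dimension" point concrete, here is a minimal sketch, my own illustration rather than any actual Virtual Earth index, of pairing keywords with a coarse spatial cell so documents can be retrieved by both at once:

    from collections import defaultdict

    def cell_key(lat, lon, cell_deg=0.01):
        """Coarse spatial key: snap a coordinate to a ~1 km grid cell."""
        return (round(lat / cell_deg), round(lon / cell_deg))

    # Inverted index keyed by (keyword, spatial cell) instead of keyword alone.
    index = defaultdict(list)

    def add_document(doc_id, text, lat, lon):
        for word in text.lower().split():
            index[(word, cell_key(lat, lon))].append(doc_id)

    def search(word, lat, lon, radius_cells=1):
        """Return documents matching the keyword near (lat, lon)."""
        ci, cj = cell_key(lat, lon)
        hits = []
        for di in range(-radius_cells, radius_cells + 1):
            for dj in range(-radius_cells, radius_cells + 1):
                hits.extend(index.get((word.lower(), (ci + di, cj + dj)), []))
        return hits

    add_document("doc1", "coffee shop", 47.6205, -122.3493)   # near the Space Needle
    print(search("coffee", 47.62, -122.35))                   # -> ['doc1']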
>>: Okay. Thanks again, Vincent.
>> Vincent Tao: Thank you.
(Applause)